Divide and Slide: Layer-Wise Refinement for Output Range Analysis of Deep Neural Networks

Chao Huang*, Jiameng Fan, Xin Chen, Wenchao Li, Qi Zhu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

12 Scopus citations

Abstract

In this article, we present a layer-wise refinement method for neural network output range analysis. While approaches such as nonlinear programming (NLP) can directly model the high nonlinearity brought by neural networks in output range analysis, they are known to be difficult to solve in general. We propose to use a convex polygonal relaxation (overapproximation) of the activation functions to cope with the nonlinearity. This allows us to encode the relaxed problem into a mixed-integer linear program (MILP), and to control the tightness of the relaxation by adjusting the number of segments in the polygon. Starting with a segment number of 1 for each neuron, which coincides with a linear programming (LP) relaxation, our approach selects neurons layer by layer to iteratively refine this relaxation. To tackle the increase in the number of integer variables under tighter refinement, we bridge the propagation-based method and the programming-based method by dividing and sliding the layer-wise constraints. Specifically, given a sliding number $s$, for the neurons in layer $l$, we only encode the constraints of the layers between $l-s$ and $l$. We show that our overall framework is sound and provides a valid overapproximation. Experiments on deep neural networks demonstrate significant improvement in output range analysis precision using our approach compared to the state-of-the-art.
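The key relaxation idea from the abstract, a convex polygonal overapproximation of an activation function whose tightness is controlled by the segment count, can be sketched in a few lines of code. The snippet below is not the authors' implementation; it is a minimal Python illustration under the simplifying assumption of a convex activation (softplus), where chords between sampled breakpoints bound the function from above and tangent lines bound it from below. The helpers `polygon_relaxation`, `softplus`, and `softplus_grad` are hypothetical names introduced only for this sketch.

```python
import math


def softplus(x):
    """Convex activation used purely for illustration: log(1 + exp(x))."""
    return math.log1p(math.exp(x))


def softplus_grad(x):
    """Derivative of softplus (the logistic sigmoid)."""
    return 1.0 / (1.0 + math.exp(-x))


def polygon_relaxation(f, df, lb, ub, k):
    """Convex polygonal overapproximation of a convex activation f on [lb, ub].

    Returns two lists of (slope, intercept) pairs:
      * chords between consecutive breakpoints -- their pointwise max is the
        piecewise-linear interpolant, an upper bound on f over [lb, ub];
      * tangents at the breakpoints -- their pointwise max is a lower bound.
    k = 1 yields the coarsest (LP-style) relaxation; larger k tightens it.
    """
    pts = [lb + (ub - lb) * i / k for i in range(k + 1)]
    chords, tangents = [], []
    for x0, x1 in zip(pts[:-1], pts[1:]):
        slope = (f(x1) - f(x0)) / (x1 - x0)
        chords.append((slope, f(x0) - slope * x0))
    for x0 in pts:
        slope = df(x0)
        tangents.append((slope, f(x0) - slope * x0))
    return chords, tangents


if __name__ == "__main__":
    lb, ub = -2.0, 3.0
    for k in (1, 2, 4, 8):
        chords, tangents = polygon_relaxation(softplus, softplus_grad, lb, ub, k)
        max_gap = 0.0
        for i in range(101):
            x = lb + (ub - lb) * i / 100
            hi = max(a * x + b for a, b in chords)     # upper envelope
            lo = max(a * x + b for a, b in tangents)   # lower envelope
            assert lo - 1e-9 <= softplus(x) <= hi + 1e-9  # soundness check
            max_gap = max(max_gap, hi - lo)
        print(f"k = {k}: worst-case relaxation gap = {max_gap:.4f}")
```

With k = 1 the enclosing region is convex and corresponds to an LP relaxation; with larger k, selecting which segment an input falls into is what introduces the integer variables of the MILP encoding, and the paper's divide-and-slide scheme bounds how many layers' worth of such constraints are encoded at once (only layers $l-s$ through $l$ for a sliding number $s$).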

Original language: English (US)
Article number: 9211410
Pages (from-to): 3323-3335
Number of pages: 13
Journal: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
Volume: 39
Issue number: 11
DOIs
State: Published - Nov 2020

Funding

Manuscript received April 17, 2020; revised June 17, 2020; accepted July 6, 2020. Date of publication October 2, 2020; date of current version October 27, 2020. This work was supported in part by NSF under Grant 1834701, Grant 1834324, Grant 1839511, Grant 1724341, and Grant 1646497; in part by the Office of Naval Research under Grant N00014-19-1-2496; in part by the U.S. Air Force Research Laboratory under Contract FA8650-16-C-2642; and in part by the DARPA BRASS Program under Agreement FA8750-16-C-0043. This article was presented at the International Conference on Embedded Software 2020 and appears as part of the ESWEEK-TCAD special issue. (Corresponding author: Chao Huang.) Chao Huang and Qi Zhu are with the Department of Electrical and Computer Engineering, Northwestern University, Evanston, IL 60208 USA (e-mail: [email protected]; [email protected]).

Keywords

  • Linear programming (LP)
  • mixed-integer linear programming (MILP)
  • neural networks
  • output range analysis
  • refinement

ASJC Scopus subject areas

  • Software
  • Computer Graphics and Computer-Aided Design
  • Electrical and Electronic Engineering
