Interpretable Architecture Neural Networks for Function Visualization

Shengtong Zhang, Daniel W. Apley*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

In many scientific research fields, understanding and visualizing a black-box function in terms of the effects of all the input variables is of great importance. Existing visualization tools do not allow one to visualize the effects of all the input variables simultaneously. Although one can select one or two of the input variables to visualize via a 2D or 3D plot while holding other variables fixed, this presents an oversimplified and incomplete picture of the model. To overcome this shortcoming, we present a new visualization approach using an Interpretable Architecture Neural Network (IANN) to visualize the effects of all the input variables directly and simultaneously. We propose two interpretable structures, each of which can be conveniently represented by a specific IANN, and we discuss a number of possible extensions. We also provide a Python package to implement our proposed method. The supplemental materials are available online.
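To make the limitation described above concrete, the following is a minimal sketch (not the IANN method itself) of the conventional approach the abstract critiques: visualizing a black-box function along one input while holding the others fixed. The toy function `f` and the fixed values are hypothetical, chosen only for illustration.

```python
import numpy as np

# Hypothetical black-box function of three inputs (illustration only;
# not a function from the paper).
def f(x1, x2, x3):
    return np.sin(x1) * x2 + x3**2

# Conventional 2D slice: vary x1 over a grid while holding x2 and x3
# fixed at nominal values.
x1_grid = np.linspace(0.0, np.pi, 50)
slice_values = f(x1_grid, x2=1.0, x3=0.5)

# Every choice of the held-fixed values (x2, x3) produces a different
# curve, so any single slice gives an incomplete picture of f -- the
# shortcoming the IANN approach is designed to overcome.
print(slice_values.shape)
```

Plotting `slice_values` against `x1_grid` would yield one of the 2D plots mentioned in the abstract; the IANN approach instead aims to show the effects of all inputs simultaneously.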

Original language: English (US)
Pages (from-to): 1258-1271
Number of pages: 14
Journal: Journal of Computational and Graphical Statistics
Volume: 32
Issue number: 4
DOIs
State: Published - 2023

Funding

This work was funded in part by the Air Force Office of Scientific Research Grant # FA9550-18-1-0381, which we gratefully acknowledge.

Keywords

  • Function visualization
  • Interpretable machine learning
  • Neural network

ASJC Scopus subject areas

  • Statistics and Probability
  • Discrete Mathematics and Combinatorics
  • Statistics, Probability and Uncertainty
