TY - JOUR
T1 - Hierarchical deep-learning neural networks
T2 - finite elements and beyond
AU - Zhang, Lei
AU - Cheng, Lin
AU - Li, Hengyang
AU - Gao, Jiaying
AU - Yu, Cheng
AU - Domel, Reno
AU - Yang, Yang
AU - Tang, Shaoqiang
AU - Liu, Wing Kam
N1 - Funding Information:
R. Domel is supported by a Research Experience for Undergraduates (REU) supplement to U.S. National Science Foundation (NSF) grant number CMMI-1762035 (summer 2019). W. K. Liu and H. Y. Li are also supported by the same NSF grant. L. Cheng, J. Gao, and C. Yu conducted this research as an unfunded, exploratory (interdisciplinary) collaboration between Northwestern University and Peking University. L. Zhang, Y. Yang, and S. Tang are supported by National Natural Science Foundation of China grant numbers 11832001, 11988102, and 11890681. We would like to acknowledge the helpful comments from Dr. Ye Lu.
Publisher Copyright:
© 2020, Springer-Verlag GmbH Germany, part of Springer Nature.
PY - 2021/1
Y1 - 2021/1
N2 - The hierarchical deep-learning neural network (HiDeNN) is systematically developed through the construction of structured deep neural networks (DNNs) in a hierarchical manner, and a special case of HiDeNN representing the finite element method (HiDeNN-FEM for short) is established. In HiDeNN-FEM, weights and biases are functions of the nodal positions; hence the training process includes the optimization of the nodal coordinates. This is the spirit of r-adaptivity, and it increases both the local and global accuracy of the interpolants. By fixing the number of hidden layers and increasing the number of neurons while training the DNNs, rh-adaptivity can be achieved, which further improves the accuracy of the solutions. The generalization to rational functions is achieved through the development of three fundamental building blocks for constructing deep hierarchical neural networks. The three building blocks are linear functions, multiplication, and inversion. With these building blocks, the class of deep-learning interpolation functions is demonstrated for interpolation theories such as Lagrange polynomials, NURBS, isogeometric analysis, the reproducing kernel particle method, and others. In HiDeNN-FEM, enrichment through the multiplication of neurons is equivalent to the enrichment in standard finite element methods, that is, the generalized, extended, and partition-of-unity finite element methods. Numerical examples performed with HiDeNN-FEM exhibit reduced approximation error compared with the standard FEM. Finally, an outlook for the generalized HiDeNN, toward high-order continuity in multiple dimensions and topology optimization, is illustrated through the hierarchy of the proposed DNNs.
KW - Data-driven
KW - Fundamental building block
KW - Neural network interpolation functions
KW - Rational functions (e.g., RKPM, NURBS, and IGA)
KW - r- and rh-adaptivity
UR - http://www.scopus.com/inward/record.url?scp=85092533398&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85092533398&partnerID=8YFLogxK
U2 - 10.1007/s00466-020-01928-9
DO - 10.1007/s00466-020-01928-9
M3 - Article
AN - SCOPUS:85092533398
SN - 0178-7675
VL - 67
SP - 207
EP - 230
JO - Computational Mechanics
JF - Computational Mechanics
IS - 1
ER -