TY - CONF
T1 - GLAD: Learning Sparse Graph Recovery
T2 - 8th International Conference on Learning Representations, ICLR 2020
AU - Shrivastava, Harsh
AU - Chen, Xinshi
AU - Chen, Binghong
AU - Lan, Guanghui
AU - Aluru, Srinivas
AU - Liu, Han
AU - Song, Le
N1 - Funding Information:
We thank our colleague Haoran Sun for his helpful comments. This research was supported in part through research cyberinfrastructure resources and services provided by the Partnership for an Advanced Computing Environment (PACE) at the Georgia Institute of Technology, Atlanta, Georgia, USA (PACE, 2017). This research was also partly supported by XSEDE Campus Champion Grant GEO150002.
Publisher Copyright:
© 2020 8th International Conference on Learning Representations, ICLR 2020. All rights reserved.
PY - 2020
Y1 - 2020
N2 - Recovering sparse conditional independence graphs from data is a fundamental problem in machine learning with wide applications. A popular formulation of the problem is ℓ1-regularized maximum likelihood estimation, and many convex optimization algorithms have been designed to solve this formulation and recover the graph structure. Recently, there has been a surge of interest in learning algorithms directly from data, in this case learning to map the empirical covariance to the sparse precision matrix. This is a challenging task, however, since the symmetric positive definiteness (SPD) and sparsity of the matrix are not easy to enforce in learned algorithms, and a direct mapping from data to precision matrix may contain many parameters. We propose a deep learning architecture, GLAD, which uses an Alternating Minimization (AM) algorithm as its model inductive bias and learns the model parameters via supervised learning. We show that GLAD learns a very compact and effective model for recovering sparse graphs from data.
AB - Recovering sparse conditional independence graphs from data is a fundamental problem in machine learning with wide applications. A popular formulation of the problem is ℓ1-regularized maximum likelihood estimation, and many convex optimization algorithms have been designed to solve this formulation and recover the graph structure. Recently, there has been a surge of interest in learning algorithms directly from data, in this case learning to map the empirical covariance to the sparse precision matrix. This is a challenging task, however, since the symmetric positive definiteness (SPD) and sparsity of the matrix are not easy to enforce in learned algorithms, and a direct mapping from data to precision matrix may contain many parameters. We propose a deep learning architecture, GLAD, which uses an Alternating Minimization (AM) algorithm as its model inductive bias and learns the model parameters via supervised learning. We show that GLAD learns a very compact and effective model for recovering sparse graphs from data.
UR - http://www.scopus.com/inward/record.url?scp=85150621863&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85150621863&partnerID=8YFLogxK
M3 - Paper
AN - SCOPUS:85150621863
Y2 - 30 April 2020
ER -