TY - JOUR
T1 - Integer Programming for Learning Directed Acyclic Graphs from Continuous Data
AU - Manzour, Hasan
AU - Küçükyavuz, Simge
AU - Wu, Hao-Hsiang
AU - Shojaie, Ali
PY - 2021/1
N2 - Learning directed acyclic graphs (DAGs) from data is a challenging task both in theory and in practice, because the number of possible DAGs scales superexponentially with the number of nodes. In this paper, we study the problem of learning an optimal DAG from continuous observational data. We cast this problem in the form of a mathematical programming model that can naturally incorporate a superstructure to reduce the set of possible candidate DAGs. We use a negative log-likelihood score function with both ℓ0 and ℓ1 penalties and propose a new mixed-integer quadratic program, referred to as a layered network (LN) formulation. The LN formulation is a compact model that enjoys as tight an optimal continuous relaxation value as the stronger but larger formulations under a mild condition. Computational results indicate that the proposed formulation outperforms existing mathematical formulations and scales better than available algorithms that can solve the same problem with only ℓ1 regularization. In particular, the LN formulation clearly outperforms existing methods in terms of computational time needed to find an optimal DAG in the presence of a sparse superstructure.
UR - https://www.mendeley.com/catalogue/02fbb7cb-3959-3999-b90a-c36d5e8f1975/
DO - 10.1287/ijoo.2019.0040
M3 - Article
SN - 2575-1484
VL - 3
SP - 46
EP - 73
JF - INFORMS Journal on Optimization
IS - 1
ER -