TY - GEN
T1 - Optimization for large-scale machine learning with distributed features and observations
AU - Nathan, Alexandros
AU - Klabjan, Diego
N1 - Publisher Copyright:
© Springer International Publishing AG 2017.
PY - 2017
Y1 - 2017
N2 - As the size of modern data sets exceeds the disk and memory capacities of a single computer, machine learning practitioners have resorted to parallel and distributed computing. Given that optimization is one of the pillars of machine learning and predictive modeling, distributed optimization methods have recently garnered ample attention in the literature. Although previous research has mostly focused on settings where either the observations or the features of the problem at hand are stored in a distributed fashion, the situation where both are partitioned across the nodes of a computer cluster (doubly distributed) has barely been studied. In this work we propose two doubly distributed optimization algorithms. The first falls under the umbrella of distributed dual coordinate ascent methods, while the second belongs to the class of stochastic gradient/coordinate descent hybrid methods. We conduct numerical experiments in Spark using real-world and simulated data sets and study the scaling properties of our methods. Our empirical evaluation shows that the proposed algorithms outperform a block-distributed ADMM method, which, to the best of our knowledge, is the only other existing doubly distributed optimization algorithm.
AB - As the size of modern data sets exceeds the disk and memory capacities of a single computer, machine learning practitioners have resorted to parallel and distributed computing. Given that optimization is one of the pillars of machine learning and predictive modeling, distributed optimization methods have recently garnered ample attention in the literature. Although previous research has mostly focused on settings where either the observations or the features of the problem at hand are stored in a distributed fashion, the situation where both are partitioned across the nodes of a computer cluster (doubly distributed) has barely been studied. In this work we propose two doubly distributed optimization algorithms. The first falls under the umbrella of distributed dual coordinate ascent methods, while the second belongs to the class of stochastic gradient/coordinate descent hybrid methods. We conduct numerical experiments in Spark using real-world and simulated data sets and study the scaling properties of our methods. Our empirical evaluation shows that the proposed algorithms outperform a block-distributed ADMM method, which, to the best of our knowledge, is the only other existing doubly distributed optimization algorithm.
KW - Big data
KW - Distributed optimization
KW - Machine learning
KW - Spark
UR - http://www.scopus.com/inward/record.url?scp=85025144345&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85025144345&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-62416-7_10
DO - 10.1007/978-3-319-62416-7_10
M3 - Conference contribution
AN - SCOPUS:85025144345
SN - 9783319624150
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 132
EP - 146
BT - Machine Learning and Data Mining in Pattern Recognition - 13th International Conference, MLDM 2017, Proceedings
A2 - Perner, Petra
PB - Springer Verlag
T2 - 13th International Conference on Machine Learning and Data Mining in Pattern Recognition, MLDM 2017
Y2 - 15 July 2017 through 20 July 2017
ER -