TY - JOUR
T1 - A Fast Distributed Asynchronous Newton-Based Optimization Algorithm
AU - Mansoori, Fatemeh
AU - Wei, Ermin
N1 - Funding Information:
Manuscript received January 8, 2019; accepted July 20, 2019. Date of publication August 6, 2019; date of current version June 27, 2020. This work was supported in part by Leslie and Mac McQuown and in part by the Defense Advanced Research Projects Agency under Contract Lagrange HR-001117S0039. Recommended by Associate Editor G. Notarstefano. (Corresponding author: Fatemeh Mansoori.) The authors are with the Department of Electrical and Computer Engineering, Northwestern University, Evanston, IL 60208 USA (e-mail: fatemehmansoori2019@u.northwestern.edu; ermin.wei@northwestern.edu).
Publisher Copyright:
© 1963-2012 IEEE.
PY - 2020/7
Y1 - 2020/7
N2 - One of the most important problems in the field of distributed optimization is that of minimizing a sum of local convex objective functions over a networked system. Most of the existing work in this area focuses on developing distributed algorithms in a synchronous setting, in the presence of a central clock, where agents must wait for the slowest one to finish its update before proceeding to the next iteration. Asynchronous distributed algorithms remove the need for a central coordinator, reduce the synchronization wait, and allow some agents to compute faster and execute more iterations. In the asynchronous setting, the only known algorithms for solving this problem achieve either a linear or a sublinear rate of convergence. In this paper, we build upon the existing literature to develop and analyze an asynchronous Newton-based method for solving a penalized version of the problem. We show that this algorithm guarantees almost sure convergence with a global linear and local quadratic rate in expectation. Numerical studies confirm the superior performance of our algorithm compared with other asynchronous methods.
AB - One of the most important problems in the field of distributed optimization is that of minimizing a sum of local convex objective functions over a networked system. Most of the existing work in this area focuses on developing distributed algorithms in a synchronous setting, in the presence of a central clock, where agents must wait for the slowest one to finish its update before proceeding to the next iteration. Asynchronous distributed algorithms remove the need for a central coordinator, reduce the synchronization wait, and allow some agents to compute faster and execute more iterations. In the asynchronous setting, the only known algorithms for solving this problem achieve either a linear or a sublinear rate of convergence. In this paper, we build upon the existing literature to develop and analyze an asynchronous Newton-based method for solving a penalized version of the problem. We show that this algorithm guarantees almost sure convergence with a global linear and local quadratic rate in expectation. Numerical studies confirm the superior performance of our algorithm compared with other asynchronous methods.
KW - Agents and autonomous systems
KW - asynchronous algorithms
KW - network analysis and control
KW - optimization algorithms
UR - http://www.scopus.com/inward/record.url?scp=85088147276&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85088147276&partnerID=8YFLogxK
U2 - 10.1109/TAC.2019.2933607
DO - 10.1109/TAC.2019.2933607
M3 - Article
AN - SCOPUS:85088147276
SN - 0018-9286
VL - 65
SP - 2769
EP - 2784
JO - IEEE Transactions on Automatic Control
JF - IEEE Transactions on Automatic Control
IS - 7
M1 - 8789473
ER -