TY - GEN
T1 - On the Convergence of NEAR-DGD for Nonconvex Optimization with Second Order Guarantees
AU - Iakovidou, Charikleia
AU - Wei, Ermin
N1 - Funding Information:
This work was supported by NSF NRI 2024774.
Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
N2 - We consider the setting where the nodes in an undirected, connected network collaborate to solve a shared objective modeled as the sum of smooth functions. We assume that each summand is privately known by a unique node. NEAR-DGD is a distributed first-order method that permits adjusting the amount of communication between nodes relative to the amount of computation performed locally, in order to balance convergence accuracy and total application cost. In this work, we generalize the convergence properties of a variant of NEAR-DGD from the strongly convex to the nonconvex case. Under mild assumptions, we show convergence to minimizers of a custom Lyapunov function. Moreover, we demonstrate that the gap between these minimizers and the second-order stationary solutions of the original problem can be made arbitrarily small depending on the choice of algorithm parameters. Finally, we accompany our theoretical analysis with a numerical experiment to evaluate the empirical performance of NEAR-DGD in the nonconvex setting.
AB - We consider the setting where the nodes in an undirected, connected network collaborate to solve a shared objective modeled as the sum of smooth functions. We assume that each summand is privately known by a unique node. NEAR-DGD is a distributed first-order method that permits adjusting the amount of communication between nodes relative to the amount of computation performed locally, in order to balance convergence accuracy and total application cost. In this work, we generalize the convergence properties of a variant of NEAR-DGD from the strongly convex to the nonconvex case. Under mild assumptions, we show convergence to minimizers of a custom Lyapunov function. Moreover, we demonstrate that the gap between these minimizers and the second-order stationary solutions of the original problem can be made arbitrarily small depending on the choice of algorithm parameters. Finally, we accompany our theoretical analysis with a numerical experiment to evaluate the empirical performance of NEAR-DGD in the nonconvex setting.
UR - http://www.scopus.com/inward/record.url?scp=85126049845&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85126049845&partnerID=8YFLogxK
U2 - 10.1109/CDC45484.2021.9683777
DO - 10.1109/CDC45484.2021.9683777
M3 - Conference contribution
AN - SCOPUS:85126049845
T3 - Proceedings of the IEEE Conference on Decision and Control
SP - 259
EP - 264
BT - 60th IEEE Conference on Decision and Control, CDC 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 60th IEEE Conference on Decision and Control, CDC 2021
Y2 - 13 December 2021 through 17 December 2021
ER -