Abstract
Distributed convex optimization algorithms employ a variety of methods for achieving exact convergence to the global optimal value (modulo numerical precision): some use time-varying dynamics, some use dynamics on each edge rather than on each node of the communication graph, some use double the communication between nodes per optimization step, and some use a specific initialization that forces the dynamics to evolve on a particular subspace. Each of these methods has drawbacks. Time-varying dynamics might require a global clock, and transients caused by disturbances might take longer and longer to die out as time progresses. Edge-based rather than node-based dynamics might increase memory and computation costs, and typically requires the communication graph to be undirected. Doubling the communication per optimization step might increase the communication cost and slow convergence to the optimal value. A specific initialization that enforces a particular invariant subspace might render the algorithm unable to recover from faults or disturbances that perturb its dynamics off this subspace, resulting in convergence to an incorrect value. In the latter case we say that the algorithm is not self-healing. In our previous work [1] we considered strongly convex objective functions with Lipschitz continuous gradients, and we introduced a new first-order method for achieving exact convergence to the global optimal value. Our new algorithm has none of the above drawbacks; in particular, it is self-healing. However, it does possess a peculiar internal instability: each node has states that grow linearly in time even as its output converges to the optimal value. In the present work, we consider general non-smooth and extended-real-valued convex objective functions (which can thus incorporate hard convex constraints).
We present proximal algorithms that again employ our new, internally unstable method for achieving exact convergence.
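Proximal algorithms replace gradient steps with evaluations of a proximal operator, which remains well defined for non-smooth and extended-real-valued terms. As a minimal illustration of this building block (not the paper's distributed method), the prox of the scaled ℓ1 norm is soft-thresholding, and the prox of the indicator function of a convex set is Euclidean projection onto that set, which is how hard constraints enter such algorithms:

```python
import numpy as np

def prox_l1(v, lam):
    """Prox of lam * ||x||_1: the soft-thresholding operator,
    argmin_x lam*||x||_1 + (1/2)*||x - v||^2."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_box(v, lo, hi):
    """Prox of the indicator function of the box [lo, hi]:
    the Euclidean projection, i.e., componentwise clipping."""
    return np.clip(v, lo, hi)

# Soft-thresholding shrinks entries toward zero and zeroes small ones.
print(prox_l1(np.array([3.0, -0.5, 1.0]), 1.0))   # [ 2. -0.  0.]
# Projection enforces the hard constraint 0 <= x <= 1 exactly.
print(prox_box(np.array([5.0, -1.0, 0.3]), 0.0, 1.0))  # [1. 0. 0.3]
```

Both operators are single-valued and cheap to evaluate here; in general, any closed proper convex function admits a well-defined proximal operator, which is what lets hard convex constraints be folded into the objective as indicator functions.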
Original language | English (US) |
---|---|
Title of host publication | 2022 58th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2022 |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
ISBN (Electronic) | 9798350399981 |
DOIs | |
State | Published - 2022 |
Event | 58th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2022 - Monticello, United States Duration: Sep 27 2022 → Sep 30 2022 |
Publication series
Name | 2022 58th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2022 |
---|---|
Conference
Conference | 58th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2022 |
---|---|
Country/Territory | United States |
City | Monticello |
Period | 9/27/22 → 9/30/22 |
Funding
This work was supported in part by a grant from the National Science Foundation (CMMI-2024774).
Keywords
- convex optimization
- distributed optimization
- proximal methods
ASJC Scopus subject areas
- Artificial Intelligence
- Computer Networks and Communications
- Computer Science Applications
- Computer Vision and Pattern Recognition
- Signal Processing
- Control and Optimization