Proximal Methods for Self-Healing and Exact Distributed Convex Optimization (Extended Abstract)

Randy A. Freeman*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Distributed convex optimization algorithms employ a variety of methods for achieving exact convergence to the global optimal value (modulo numerical precision): some use time-varying dynamics, some use dynamics on each edge rather than on each node of the communication graph, some use double the communication between nodes per optimization step, and some use a specific initialization that forces the dynamics to evolve on a particular subspace. Each of these methods has its drawbacks. Using time-varying dynamics might require a global clock, and it might cause transients due to disturbances to take longer and longer to die out as time progresses. Using edge-based rather than node-based dynamics might increase the memory and computation costs, and it typically requires the communication graph to be undirected. Using double the communication per optimization step might increase the communication cost, and it might also slow down the convergence to the optimal value. Using a specific initialization to enforce a particular invariant subspace might render the algorithm unable to recover from faults or disturbances that perturb its dynamics from this subspace, resulting in convergence to an incorrect value. In this latter case we say that the algorithm is not self-healing. In our previous work [1] we considered strongly convex objective functions having Lipschitz continuous gradients, and we introduced a new first-order method for achieving exact convergence to the global optimal value. Our new algorithm has none of the above drawbacks, and in particular it is a self-healing algorithm. However, it does possess a peculiar internal instability: each node has states that grow linearly in time even as its output converges to the optimal value. In the present work, we consider general non-smooth and extended-real-valued convex objective functions (which can thus incorporate hard convex constraints). We present proximal algorithms that again employ our new, internally unstable method for achieving exact convergence.
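As background for the proximal machinery the abstract refers to, the following Python sketch illustrates the proximal operator prox_{tf}(v) = argmin_x f(x) + (1/(2t))||x - v||^2, the basic building block of proximal methods. This is only an illustration of the operator itself, not the algorithm proposed in the paper; the function names prox_l1 and prox_box_indicator are illustrative. It also shows how an extended-real-valued objective, here the indicator function of a box, encodes a hard constraint whose proximal operator reduces to Euclidean projection.

import numpy as np

def prox_l1(v, t):
    # Proximal operator of f(x) = ||x||_1: elementwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_box_indicator(v, lo, hi):
    # Proximal operator of the indicator of the box [lo, hi]^n
    # (an extended-real-valued function: 0 inside the box, +infinity outside).
    # Its prox is simply Euclidean projection onto the box.
    return np.clip(v, lo, hi)

# Example: a single proximal evaluation at v = (1.5, -0.2, 0.7).
v = np.array([1.5, -0.2, 0.7])
print(prox_l1(v, 0.5))                  # approximately [1.0, 0.0, 0.2]
print(prox_box_indicator(v, 0.0, 1.0))  # [1.0, 0.0, 0.7]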

Original language: English (US)
Title of host publication: 2022 58th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9798350399981
State: Published - 2022
Event: 58th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2022 - Monticello, United States
Duration: Sep 27, 2022 - Sep 30, 2022

Publication series

Name: 2022 58th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2022

Conference

Conference: 58th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2022
Country/Territory: United States
City: Monticello
Period: 9/27/22 - 9/30/22

Funding

This work was supported in part by a grant from the National Science Foundation (CMMI-2024774).

Keywords

  • convex optimization
  • distributed optimization
  • proximal methods

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Networks and Communications
  • Computer Science Applications
  • Computer Vision and Pattern Recognition
  • Signal Processing
  • Control and Optimization
