S-NEAR-DGD: A Flexible Distributed Stochastic Gradient Method for Inexact Communication

Charikleia Iakovidou*, Ermin Wei

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


We present and analyze a stochastic distributed method (S-NEAR-DGD) that can tolerate inexact computation and inaccurate information exchange to alleviate the problems of costly gradient evaluations and bandwidth-limited communication in large-scale systems. Our method is based on a class of flexible, distributed first-order algorithms that allow for the tradeoff of computation and communication to best accommodate the application setting. We assume that the information exchanged between nodes is subject to random distortion and that only stochastic approximations of the true gradients are available. Our theoretical results prove that the proposed algorithm converges linearly in expectation to a neighborhood of the optimal solution for strongly convex objective functions with Lipschitz gradients. We characterize the dependence of this neighborhood on algorithm and network parameters, the quality of the communication channel and the precision of the stochastic gradient approximations used. Finally, we provide numerical results to evaluate the empirical performance of our method.
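The iteration described in the abstract — a consensus (communication) step on inexact neighbor values followed by a stochastic gradient step — can be illustrated with a minimal simulation. This is a hedged sketch of the general NEAR-DGD-style structure, not the paper's actual S-NEAR-DGD algorithm: the network, local quadratic objectives, quantizer, noise levels, and step size below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (not from the paper): 4 nodes, each holding a local
# strongly convex quadratic f_i(x) = 0.5 * a[i] * (x - b[i])^2.
a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([1.0, 2.0, 3.0, 4.0])
x_star = np.sum(a * b) / np.sum(a)  # minimizer of sum_i f_i

# Doubly stochastic mixing matrix for a 4-node ring (assumed topology).
W = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])

def quantize(x, step=0.05):
    # Uniform rounding models bandwidth-limited, distorted communication.
    return np.round(x / step) * step

x = np.zeros(4)   # local iterates, one scalar per node
alpha = 0.05      # step size (illustrative choice)
for _ in range(500):
    # Communication step: mix *quantized* (inexact) neighbor values.
    x_mix = W @ quantize(x)
    # Computation step: true gradient plus zero-mean stochastic noise.
    grad = a * (x_mix - b) + 0.01 * rng.standard_normal(4)
    x = x_mix - alpha * grad

# Iterates settle in a neighborhood of x_star whose size depends on the
# quantization step and the gradient-noise variance, mirroring the
# paper's qualitative result for strongly convex objectives.
err = np.abs(x - x_star).max()
```

The key qualitative behavior matches the abstract's claim: the error contracts geometrically at first, then stalls at a floor determined by the communication distortion and the gradient-noise level rather than decaying to zero.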

Original language: English (US)
Pages (from-to): 1281-1287
Number of pages: 7
Journal: IEEE Transactions on Automatic Control
Issue number: 2
State: Published - Feb 1 2023
Externally published: Yes


Keywords

  • Distributed optimization
  • network optimization
  • quantization
  • stochastic optimization

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Computer Science Applications
  • Electrical and Electronic Engineering


