NESTT: A nonconvex primal-dual splitting method for distributed and stochastic optimization

Davood Hajinezhad, Mingyi Hong, Tuo Zhao, Zhaoran Wang

Research output: Contribution to journal › Conference article

13 Citations (Scopus)

Abstract

We study a stochastic and distributed algorithm for nonconvex problems whose objective consists of a sum of N nonconvex L_i/N-smooth functions, plus a non-smooth regularizer. The proposed NonconvEx primal-dual SpliTTing (NESTT) algorithm splits the problem into N subproblems and uses an augmented-Lagrangian-based primal-dual scheme to solve it in a distributed and stochastic manner. With a special non-uniform sampling, a version of NESTT achieves an ϵ-stationary solution using O((∑_{i=1}^N √(L_i/N))^2 / ϵ) gradient evaluations, which can be up to O(N) times better than (proximal) gradient descent methods. It also achieves a Q-linear convergence rate for nonconvex ℓ1-penalized quadratic problems with polyhedral constraints. Further, we reveal a fundamental connection between primal-dual based methods and a few primal-only methods such as IAG/SAG/SAGA.
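The up-to-O(N) gap between the two complexity bounds can be checked numerically. The sketch below (function names are illustrative; the common 1/ϵ factor and constants are dropped) compares NESTT's bound (∑_i √(L_i/N))^2 with the ∑_i L_i scaling of full (proximal) gradient descent: by Cauchy-Schwarz the two coincide when all L_i are equal, and differ by a factor close to N when one L_i dominates.

```python
import math

def nestt_cost(L):
    """NESTT bound per unit 1/eps: (sum_i sqrt(L_i/N))^2 = (sum_i sqrt(L_i))^2 / N."""
    N = len(L)
    return sum(math.sqrt(Li) for Li in L) ** 2 / N

def prox_gd_cost(L):
    """Full proximal gradient descent: N gradients per iteration times
    O(mean L_i) iterations per unit 1/eps, i.e. sum_i L_i overall."""
    return sum(L)

# Uniform smoothness: the two bounds coincide (Cauchy-Schwarz is tight here).
L_uniform = [1.0] * 100
print(nestt_cost(L_uniform), prox_gd_cost(L_uniform))

# Highly non-uniform smoothness: NESTT's bound is roughly N times smaller.
L_skewed = [100.0] + [1e-6] * 99
print(prox_gd_cost(L_skewed) / nestt_cost(L_skewed))
```

With N = 100 and one dominant L_i, the ratio comes out close to N, matching the abstract's "up to O(N) times better" claim under this reading of the bounds.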

Original language: English (US)
Pages (from-to): 3215-3223
Number of pages: 9
Journal: Advances in Neural Information Processing Systems
ISSN: 1049-5258
State: Published - Jan 1 2016
Event: 30th Annual Conference on Neural Information Processing Systems, NIPS 2016 - Barcelona, Spain
Duration: Dec 5 2016 - Dec 10 2016

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
