Minimax Bounds on Stochastic Batched Convex Optimization

John Duchi, Feng Ruan, Chulhee Yun

Research output: Contribution to journal › Conference article › peer-review

Abstract

We study the stochastic batched convex optimization problem, in which we use many parallel observations to optimize a convex function given limited rounds of interaction. In each of M rounds, an algorithm may query for information at n points, and after issuing all n queries, it receives unbiased noisy function and/or (sub)gradient evaluations at the n points. After M such rounds, the algorithm must output an estimator. We provide lower and upper bounds on the performance of such batched convex optimization algorithms in zeroth and first-order settings for Lipschitz convex and smooth strongly convex functions. Our rates of convergence (nearly) achieve the fully sequential rate once M = O(d log log n), where d is the problem dimension, but the rates may degrade exponentially as the dimension d increases, in contrast to fully sequential settings.
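
To make the interaction model concrete, here is a minimal sketch of the batched first-order protocol the abstract describes, not the paper's algorithm or its minimax-optimal procedure: in each of M rounds the learner submits n query points, receives unbiased noisy gradients at all of them at once, and only then may adapt. The function names (noisy_grad, batched_sgd), the quadratic objective, and all parameter values below are illustrative assumptions.

```python
import numpy as np

def noisy_grad(x, x_star, sigma, rng):
    """Unbiased stochastic gradient of f(x) = 0.5 * ||x - x_star||^2."""
    return (x - x_star) + sigma * rng.standard_normal(x.shape)

def batched_sgd(x0, x_star, M=5, n=200, sigma=1.0, step=0.5, seed=0):
    """Toy batched scheme: each round queries n copies of the current
    iterate and averages the n noisy gradients, so the effective noise
    per round shrinks by a factor of sqrt(n). Only M updates occur,
    one per round of interaction."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(M):
        grads = np.stack([noisy_grad(x, x_star, sigma, rng)
                          for _ in range(n)])
        x = x - step * grads.mean(axis=0)  # single update per round
    return x

d = 10
x_star = np.ones(d)
x_hat = batched_sgd(np.zeros(d), x_star)
print("estimation error:", np.linalg.norm(x_hat - x_star))
```

Querying all n points at the same iterate is only one possible (nonadaptive-within-round) choice; the paper's bounds concern how much such limited adaptivity can cost relative to fully sequential methods.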

Original language: English (US)
Pages (from-to): 3065-3162
Number of pages: 98
Journal: Proceedings of Machine Learning Research
Volume: 75
State: Published - 2018
Event: 31st Annual Conference on Learning Theory, COLT 2018 - Stockholm, Sweden
Duration: Jul 6 2018 - Jul 9 2018

Keywords

  • batched optimization
  • parallel computing
  • stochastic convex optimization

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
