Adaptive sampling strategies for stochastic optimization

Raghu Bollapragada, Richard Byrd, Jorge Nocedal

Research output: Contribution to journal › Article › peer-review

13 Scopus citations

Abstract

In this paper, we propose a stochastic optimization method that adaptively controls the sample size used in the computation of gradient approximations. Unlike other variance reduction techniques that either require additional storage or the regular computation of full gradients, the proposed method reduces variance by increasing the sample size as needed. The decision to increase the sample size is governed by an inner product test that ensures that search directions are descent directions with high probability. We show that the inner product test improves upon the well-known norm test, and can be used as a basis for an algorithm that is globally convergent on nonconvex functions and enjoys a global linear rate of convergence on strongly convex functions. Numerical experiments on logistic regression and nonlinear least squares problems illustrate the performance of the algorithm.
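
The core mechanism described in the abstract is the inner product test: the sample size is increased whenever the sample variance of the inner products between per-sample gradients and the averaged gradient is too large relative to the gradient norm, since a small variance ensures the negative sampled gradient is a descent direction with high probability. The following is a minimal Python sketch of that idea, assuming a hypothetical per-component gradient oracle `grad_fn(x, i)` and a user-chosen parameter `theta`; the scaling follows the abstract's description loosely and is not the paper's exact rule or constants.

import numpy as np

def inner_product_test_sample_size(grad_fn, x, sample, theta=0.9):
    """Suggest a sample size via an inner-product-test-style criterion.

    grad_fn(x, i) -- hypothetical oracle returning the gradient of the
                     i-th component function at x (1-D array).
    sample        -- indices of the components in the current sample.
    theta         -- tolerance controlling how much variance is allowed.
    """
    per_sample_grads = np.stack([grad_fn(x, i) for i in sample])
    g_S = per_sample_grads.mean(axis=0)  # sampled gradient estimate

    # Sample variance of the inner products grad_i(x)^T g_S.
    inner = per_sample_grads @ g_S
    var = inner.var(ddof=1)

    # Inner product test (sketch): the variance, scaled by the sample
    # size, should be small relative to ||g_S||^4 so that -g_S is a
    # descent direction with high probability.
    threshold = theta**2 * np.linalg.norm(g_S) ** 4
    if var / len(sample) <= threshold:
        return len(sample)  # current sample size suffices
    # Otherwise grow the sample so the test would hold at the
    # observed variance level.
    return int(np.ceil(var / threshold))

In this sketch, the sample grows only when the test fails, which mirrors the abstract's point that variance is reduced by adaptively increasing the sample size rather than by storing past gradients or periodically computing full gradients.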

Original language: English (US)
Pages (from-to): 3312-3343
Number of pages: 32
Journal: SIAM Journal on Optimization
Volume: 28
Issue number: 4
DOIs
State: Published - 2018

Keywords

  • Machine learning
  • Sample selection
  • Stochastic optimization

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science
