An inexact successive quadratic approximation method for L-1 regularized optimization

Richard H. Byrd, Jorge Nocedal*, Figen Oztoprak

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

37 Scopus citations


We study a Newton-like method for the minimization of an objective function ϕ that is the sum of a smooth function and an ℓ1 regularization term. This method, which is sometimes referred to in the literature as a proximal Newton method, computes a step by minimizing a piecewise quadratic model qk of the objective function ϕ. In order to make this approach efficient in practice, it is imperative to perform this inner minimization inexactly. In this paper, we give inexactness conditions that guarantee global convergence and that can be used to control the local rate of convergence of the iteration. Our inexactness conditions are based on a semi-smooth function that represents a (continuous) measure of the optimality conditions of the problem, and that embodies the soft-thresholding iteration. We give careful consideration to the algorithm employed for the inner minimization, and report numerical results on two test sets originating in machine learning.
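The iteration described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the smooth term is assumed to be a least-squares loss (so the exact Hessian is available), the inner minimization of the piecewise quadratic model is performed inexactly by a fixed number of ISTA-style soft-thresholding sweeps, and a unit step on the model minimizer is assumed. The stopping test uses the soft-thresholding residual as a continuous measure of optimality, in the spirit of the semi-smooth function mentioned above.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def inexact_prox_newton(A, b, lam, max_outer=50, inner_iters=10, tol=1e-8):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by an inexact proximal
    Newton method: each outer step approximately minimizes the piecewise
    quadratic model q_k via a few ISTA sweeps (assumed inner solver)."""
    n = A.shape[1]
    x = np.zeros(n)
    H = A.T @ A                        # exact Hessian of the smooth part
    t = 1.0 / np.linalg.norm(H, 2)     # inner step size 1/L
    for _ in range(max_outer):
        g = A.T @ (A @ x - b)          # gradient of the smooth part
        # Continuous optimality measure: soft-thresholding residual.
        if np.linalg.norm(x - soft_threshold(x - g, lam)) < tol:
            break
        z = x.copy()
        for _ in range(inner_iters):   # inexact model minimization (ISTA)
            grad_model = g + H @ (z - x)
            z = soft_threshold(z - t * grad_model, t * lam)
        x = z                          # unit step onto the model minimizer
    return x
```

In practice the paper's inexactness conditions would replace the fixed `inner_iters` count with an adaptive test on the model's optimality measure, which is what controls the local rate of convergence.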

Original language: English (US)
Pages (from-to): 375-396
Number of pages: 22
Journal: Mathematical Programming
Issue number: 2
State: Published - Jun 1 2016


Keywords

  • Inexact proximal Newton
  • Orthant-based quasi-Newton
  • Sparse optimization

ASJC Scopus subject areas

  • Software
  • Mathematics (all)


