Parametric simplex method for sparse learning

Haotian Pang, Robert Vanderbei, Han Liu, Tuo Zhao*

*Corresponding author for this work

Research output: Contribution to journal › Conference article

Abstract

High-dimensional sparse learning poses a great computational challenge to large-scale data analysis. In this paper, we are interested in a broad class of sparse learning approaches formulated as linear programs parametrized by a regularization factor, and we solve them by the parametric simplex method (PSM). Our parametric simplex method offers significant advantages over competing methods: (1) PSM naturally obtains the complete solution path for all values of the regularization parameter; (2) PSM provides a high-precision dual certificate stopping criterion; (3) PSM yields sparse solutions within very few iterations, and the solution sparsity significantly reduces the computational cost per iteration. In particular, we demonstrate the superiority of PSM over various sparse learning approaches, including the Dantzig selector for sparse linear regression, LAD-Lasso for sparse robust linear regression, CLIME for sparse precision matrix estimation, sparse differential network estimation, and sparse Linear Programming Discriminant (LPD) analysis. We then provide sufficient conditions under which PSM always outputs sparse solutions, so that its computational performance can be significantly boosted. Thorough numerical experiments demonstrate the outstanding performance of the PSM method.
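
As a concrete point of reference for the linear-program formulation discussed in the abstract, the sketch below writes the Dantzig selector as an LP and solves it at a single value of the regularization parameter with an off-the-shelf solver. This is only an illustrative stand-in, not the authors' parametric simplex implementation: PSM pivots through the entire regularization path rather than re-solving from scratch, and the function name dantzig_selector_lp, the solver choice, and the toy data are assumptions made for this example.

```python
# Minimal sketch (not the authors' PSM code): the Dantzig selector,
#     min ||beta||_1  s.t.  ||X^T (y - X beta)||_inf <= lam,
# written as a linear program and solved at one fixed lam.
import numpy as np
from scipy.optimize import linprog

def dantzig_selector_lp(X, y, lam):
    """Solve the Dantzig selector LP for a single regularization value lam."""
    n, d = X.shape
    A = X.T @ X                      # d x d Gram matrix
    b = X.T @ y                      # d-vector
    # Split beta = beta_plus - beta_minus with both parts nonnegative,
    # so that ||beta||_1 = 1^T (beta_plus + beta_minus) is linear.
    c = np.ones(2 * d)
    # |X^T (y - X beta)| <= lam componentwise, rewritten as two stacked
    # sets of linear inequalities in (beta_plus, beta_minus).
    A_ub = np.vstack([np.hstack([-A,  A]),
                      np.hstack([ A, -A])])
    b_ub = np.concatenate([lam * np.ones(d) - b,
                           lam * np.ones(d) + b])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    z = res.x
    return z[:d] - z[d:]             # recover beta = beta_plus - beta_minus

# Toy usage: sparse ground truth, moderate regularization.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
beta_true = np.zeros(20)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.standard_normal(100)
beta_hat = dantzig_selector_lp(X, y, lam=1.0)
print(np.round(beta_hat, 2))
```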

Original language: English (US)
Pages (from-to): 188-197
Number of pages: 10
Journal: Advances in Neural Information Processing Systems
ISSN: 1049-5258
Volume: 2017-December
State: Published - Jan 1 2017
Event: 31st Annual Conference on Neural Information Processing Systems, NIPS 2017 - Long Beach, United States
Duration: Dec 4 2017 - Dec 9 2017

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing

Cite this

Pang, H., Vanderbei, R., Liu, H., & Zhao, T. (2017). Parametric simplex method for sparse learning. Advances in Neural Information Processing Systems, 2017-December, 188-197.