On uniform deviations of general empirical risks with unboundedness, dependence, and high dimensionality

Wenxin Jiang*

*Corresponding author for this work

Research output: Contribution to journal › Article

7 Scopus citations

Abstract

The statistical learning theory of risk minimization depends heavily on probability bounds for uniform deviations of the empirical risks. Classical probability bounds using Hoeffding's inequality cannot accommodate more general situations with unbounded loss and dependent data. The current paper introduces an inequality that extends Hoeffding's inequality to handle these more general situations. We will apply this inequality to provide probability bounds for uniform deviations in a very general framework, which can involve discrete decision rules, unbounded loss, and a dependence structure that can be more general than either martingale or strong mixing. We will consider two examples with high-dimensional predictors: autoregression (AR) with ℓ1 loss, and an ARX model with variable selection for sign classification, which uses both lagged responses and exogenous predictors.
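For context, the classical Hoeffding inequality that the abstract refers to can be stated as follows. This is standard background, not the paper's generalized inequality: it requires the summands to be independent and almost surely bounded, $X_i \in [a_i, b_i]$, which is exactly what the paper's extension relaxes.

```latex
% Classical Hoeffding inequality (background only; the paper's extension
% relaxes both the boundedness and the independence assumptions).
\[
  \Pr\!\left( \left| \frac{1}{n}\sum_{i=1}^{n} \bigl(X_i - \mathbb{E}X_i\bigr) \right| \ge t \right)
  \;\le\;
  2\exp\!\left( \frac{-2\,n^2 t^2}{\sum_{i=1}^{n} (b_i - a_i)^2} \right),
  \qquad t > 0.
\]
```

A uniform-deviation bound applies such a tail inequality simultaneously over a whole class of decision rules, typically via a union bound or covering argument, which is where the high dimensionality of the predictor space enters.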

Original language: English (US)
Pages (from-to): 977-996
Number of pages: 20
Journal: Journal of Machine Learning Research
Volume: 10
State: Published - Jan 1 2009

Keywords

  • Dependence
  • Empirical risk
  • Probability bound
  • Unbounded loss
  • Uniform deviation

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Statistics and Probability
  • Artificial Intelligence
