A constrained risk inequality for general losses

John Duchi, Feng Ruan

Research output: Contribution to journal › Conference article › peer-review

Abstract

We provide a general constrained risk inequality that applies to arbitrary non-decreasing losses, extending a result of Brown and Low [Ann. Stat. 1996]. Given two distributions P0 and P1, we lower bound the risk of estimating a parameter θ(P1) under P1 given an upper bound on the risk of estimating the parameter θ(P0) under P0. The inequality is a useful tool: its proof relies only on the Cauchy-Schwarz inequality, it applies to general losses, including optimality gaps in stochastic convex optimization, and it transparently yields risk lower bounds for super-efficient and adaptive estimators.
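
For orientation, here is a hedged sketch of the squared-error special case due to Brown and Low [Ann. Stat. 1996], which the paper extends to arbitrary non-decreasing losses; the estimator \delta, the targets \theta_i = \theta(P_i), the risk level \varepsilon, and the affinity I below are notation supplied here for illustration and are not taken from the abstract. If an estimator \delta satisfies \mathbb{E}_{P_0}[(\delta - \theta_0)^2] \le \varepsilon^2, then with the likelihood ratio L = dP_1/dP_0 and I^2 = \mathbb{E}_{P_0}[L^2], the Cauchy-Schwarz inequality gives

\[
\bigl|\mathbb{E}_{P_1}[\delta] - \theta_0\bigr|
  = \bigl|\mathbb{E}_{P_0}[(\delta - \theta_0)\,L]\bigr|
  \le \bigl(\mathbb{E}_{P_0}[(\delta - \theta_0)^2]\bigr)^{1/2}
      \bigl(\mathbb{E}_{P_0}[L^2]\bigr)^{1/2}
  \le \varepsilon I,
\]

so, writing \Delta = |\theta_1 - \theta_0|, the bias of \delta under P_1 is at least \Delta - \varepsilon I, and Jensen's inequality yields the risk lower bound

\[
\mathbb{E}_{P_1}\bigl[(\delta - \theta_1)^2\bigr]
  \ge \bigl(\Delta - \varepsilon I\bigr)_+^2 .
\]

This is the tradeoff behind the super-efficiency bounds the abstract mentions: pushing the risk \varepsilon^2 under P_0 toward zero forces the risk under a nearby P_1 up toward \Delta^2.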

Original language: English (US)
Pages (from-to): 802-810
Number of pages: 9
Journal: Proceedings of Machine Learning Research
Volume: 130
State: Published - 2021
Event: 24th International Conference on Artificial Intelligence and Statistics, AISTATS 2021 - Virtual, Online, United States
Duration: Apr 13 2021 – Apr 15 2021

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
