Abstract
We provide a general constrained risk inequality that applies to arbitrary non-decreasing losses, extending a result of Brown and Low [Ann. Stat. 1996]. Given two distributions P0 and P1, we derive a lower bound on the risk of estimating a parameter θ(P1) under P1 given an upper bound on the risk of estimating the parameter θ(P0) under P0. The inequality is a useful tool: its proof relies only on the Cauchy-Schwarz inequality, it applies to general losses, including optimality gaps in stochastic convex optimization, and it transparently gives risk lower bounds for super-efficient and adaptive estimators.
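The inequality itself is not reproduced on this page. As a rough illustration only, the classical squared-error case due to Brown and Low (1996), which the abstract says the paper generalizes to arbitrary non-decreasing losses, can be sketched as below; the notation (θ0, θ1, Δ, ε, and the chi-square-type quantity I) is introduced here for illustration, assumes P1 is absolutely continuous with respect to P0, and may differ from the paper's exact statement.

```latex
% Illustrative sketch of the classical squared-error constrained risk
% inequality (Brown & Low, 1996). Notation (Delta, epsilon, I) is ours,
% not necessarily the paper's; assumes P_1 << P_0.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Let $\theta_0 = \theta(P_0)$, $\theta_1 = \theta(P_1)$,
$\Delta = |\theta_1 - \theta_0|$, and
$I^2 = \mathbb{E}_{P_0}\bigl[(dP_1/dP_0)^2\bigr]$.
If an estimator $\hat\theta$ satisfies
$\mathbb{E}_{P_0}\bigl[(\hat\theta - \theta_0)^2\bigr] \le \epsilon^2$,
then by the Cauchy--Schwarz inequality
\[
  \bigl|\mathbb{E}_{P_1}[\hat\theta - \theta_0]\bigr|
  = \Bigl|\mathbb{E}_{P_0}\Bigl[(\hat\theta - \theta_0)\,
      \tfrac{dP_1}{dP_0}\Bigr]\Bigr|
  \le \epsilon I,
\]
so that $\bigl|\mathbb{E}_{P_1}[\hat\theta - \theta_1]\bigr| \ge \Delta - \epsilon I$.
Whenever $\Delta \ge \epsilon I$, Jensen's inequality then gives the risk lower bound
\[
  \mathbb{E}_{P_1}\bigl[(\hat\theta - \theta_1)^2\bigr]
  \ge \bigl(\mathbb{E}_{P_1}[\hat\theta - \theta_1]\bigr)^2
  \ge (\Delta - \epsilon I)^2.
\]
\end{document}
```

Consistent with the abstract, the only analytic ingredient in this squared-error sketch is the Cauchy-Schwarz step; the paper's contribution is to extend such a bound to general non-decreasing losses.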
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 802-810 |
| Number of pages | 9 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 130 |
| State | Published - 2021 |
| Event | 24th International Conference on Artificial Intelligence and Statistics, AISTATS 2021 - Virtual, Online, United States. Duration: Apr 13 2021 → Apr 15 2021 |
ASJC Scopus subject areas
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability