Abstract
We consider the stochastic contextual bandit problem under a high-dimensional linear model. We focus on the case where the action space is finite and random, with each action associated with a randomly generated contextual covariate. This setting has essential applications such as personalized recommendation, online advertising, and personalized medicine, yet balancing the exploration-exploitation tradeoff in high dimensions is challenging. We modify the LinUCB algorithm to operate in doubly growing epochs and estimate the parameter using the best subset selection method, which is easy to implement in practice. This approach achieves (Formula presented.) regret with high probability, which is nearly independent of the “ambient” regression model dimension d. We further attain a sharper (Formula presented.) regret by using the SupLinUCB framework, matching the minimax lower bound of the low-dimensional linear stochastic bandit problem. Finally, we conduct extensive numerical experiments to demonstrate empirically the applicability and robustness of our algorithms. Supplementary materials for this article are available online.
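To make the epoch structure concrete, the following is a minimal, illustrative Python sketch of the general idea described in the abstract: a LinUCB-style selection rule whose sparse parameter estimate is refreshed only at the start of each doubling epoch, with best subset selection implemented here by exhaustive search over small supports. The function names, the support size `s`, the exploration width `alpha`, and the simulation setup are assumptions made for illustration; this is not the authors' exact algorithm or tuning.

```python
# Illustrative sketch only: epoch-based LinUCB with a best-subset-selection
# estimate refreshed at doubling epochs. All tuning choices are assumptions.
import itertools
import numpy as np


def best_subset_ols(X, y, s):
    """Exhaustive best subset selection: OLS over every support of size s."""
    d = X.shape[1]
    best_theta, best_rss = np.zeros(d), np.inf
    for support in itertools.combinations(range(d), s):
        Xs = X[:, support]
        coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        resid = y - Xs @ coef
        rss = resid @ resid
        if rss < best_rss:
            best_rss = rss
            best_theta = np.zeros(d)
            best_theta[list(support)] = coef
    return best_theta


def run_epoch_linucb(contexts, reward_fn, d, s=2, alpha=1.0, T=512):
    """Epoch-based UCB: the sparse estimate is refreshed only at epoch starts."""
    X_hist, y_hist = [], []
    theta_hat = np.zeros(d)
    V = np.eye(d)                       # regularized Gram matrix for the UCB width
    next_refresh = 1
    total_reward = 0.0
    for t in range(T):
        A = contexts(t)                 # K x d matrix of candidate action covariates
        width = np.sqrt(np.sum((A @ np.linalg.inv(V)) * A, axis=1))
        a = int(np.argmax(A @ theta_hat + alpha * width))
        r = reward_fn(t, A[a])
        X_hist.append(A[a]); y_hist.append(r)
        V += np.outer(A[a], A[a])
        total_reward += r
        if t + 1 == next_refresh:       # doubling epochs: refresh at t = 1, 2, 4, 8, ...
            theta_hat = best_subset_ols(np.array(X_hist), np.array(y_hist), s)
            next_refresh *= 2
    return total_reward


# Toy usage: a 20-dimensional model whose true parameter has only 2 nonzeros.
rng = np.random.default_rng(0)
theta_star = np.zeros(20); theta_star[[3, 7]] = 1.0
contexts = lambda t: rng.normal(size=(10, 20))              # K = 10 random actions
reward_fn = lambda t, x: x @ theta_star + 0.1 * rng.normal()
print(run_epoch_linucb(contexts, reward_fn, d=20, s=2))
```

The doubling-epoch design keeps the number of (relatively expensive) best-subset refits logarithmic in the horizon, which is the practical appeal of epoch-based estimation in this setting.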
| Original language | English (US) |
| --- | --- |
| Journal | Journal of the American Statistical Association |
| DOIs | |
| State | Accepted/In press - 2022 |
Keywords
- Best subset selection
- High-dimensional models
- Regret analysis
- Stochastic bandit
ASJC Scopus subject areas
- Statistics and Probability
- Statistics, Probability and Uncertainty