Abstract
This paper considers the sample complexity of the multi-armed bandit with dependencies among the arms. Some of the most successful algorithms for this problem use the principle of optimism in the face of uncertainty to guide exploration. The clearest example of this is the class of upper confidence bound (UCB) algorithms, but recent work has shown that a simple posterior sampling algorithm, sometimes called Thompson sampling, can be analyzed in the same manner as optimistic approaches. In this paper, we develop a regret bound that holds for both classes of algorithms. This bound applies broadly and can be specialized to many model classes. It depends on a new notion we refer to as the eluder dimension, which measures the degree of dependence among action rewards. Compared to UCB algorithm regret bounds for specific model classes, our general bound matches the best available for linear models and is stronger than the best available for generalized linear models.
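To make the two algorithm classes the abstract contrasts concrete, below is a minimal sketch of UCB1 and Bernoulli Thompson sampling on a toy bandit with independent arms. This is an illustration only, not the paper's algorithm or setting (the paper treats dependent arms and general model classes); the arm reward probabilities and horizon are hypothetical.

```python
# Illustrative sketch: UCB1 vs. Thompson sampling on an independent-arm
# Bernoulli bandit. Hypothetical example; not the paper's dependent-arm setting.
import math
import random


def ucb1(means, horizon):
    """UCB1: pull the arm with the highest upper confidence bound."""
    n_arms = len(means)
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    total_reward = 0.0
    for t in range(1, horizon + 1):
        if t <= n_arms:  # play each arm once to initialize
            arm = t - 1
        else:
            # Optimism in the face of uncertainty: empirical mean plus bonus.
            arm = max(
                range(n_arms),
                key=lambda a: sums[a] / counts[a]
                + math.sqrt(2 * math.log(t) / counts[a]),
            )
        reward = 1.0 if random.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total_reward += reward
    return total_reward


def thompson(means, horizon):
    """Thompson sampling: sample a mean from each Beta posterior, act greedily."""
    n_arms = len(means)
    alpha = [1] * n_arms  # Beta(1, 1) uniform priors
    beta = [1] * n_arms
    total_reward = 0.0
    for _ in range(horizon):
        samples = [random.betavariate(alpha[a], beta[a]) for a in range(n_arms)]
        arm = samples.index(max(samples))
        reward = 1.0 if random.random() < means[arm] else 0.0
        alpha[arm] += int(reward)       # posterior update on observed reward
        beta[arm] += 1 - int(reward)
        total_reward += reward
    return total_reward


if __name__ == "__main__":
    means = [0.3, 0.5, 0.7]  # hypothetical arm reward probabilities
    print("UCB1 total reward:    ", ucb1(means, 10_000))
    print("Thompson total reward:", thompson(means, 10_000))
```

Both methods explore by acting on an optimistic estimate: UCB1 adds an explicit confidence bonus, while Thompson sampling draws the estimate from the posterior, which is why the paper can analyze the two in the same manner.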
| Original language | English (US) |
| --- | --- |
| Journal | Advances in Neural Information Processing Systems |
| State | Published - 2013 |
| Event | 27th Annual Conference on Neural Information Processing Systems, NIPS 2013; Lake Tahoe, NV, United States; Dec 5 2013 to Dec 10 2013 |
ASJC Scopus subject areas
- Computer Networks and Communications
- Information Systems
- Signal Processing