Learning to optimize via posterior sampling

Daniel Russo*, Benjamin Van Roy

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

237 Scopus citations

Abstract

This paper considers the use of a simple posterior sampling algorithm to balance exploration and exploitation when learning to optimize actions, such as in multiarmed bandit problems. The algorithm, also known as Thompson Sampling and as probability matching, offers significant advantages over the popular upper confidence bound (UCB) approach, and can be applied to problems with finite or infinite action spaces and complicated relationships among action rewards. We make two theoretical contributions. The first establishes a connection between posterior sampling and UCB algorithms. This result lets us convert regret bounds developed for UCB algorithms into Bayesian regret bounds for posterior sampling. Our second theoretical contribution is a Bayesian regret bound for posterior sampling that applies broadly and can be specialized to many model classes. This bound depends on a new notion we refer to as the eluder dimension, which measures the degree of dependence among action rewards. Compared with the Bayesian regret bounds available for UCB algorithms on specific model classes, our general bound matches the best available for linear models and is stronger than the best available for generalized linear models. Further, our analysis provides insight into performance advantages of posterior sampling, which are highlighted through simulation results in which posterior sampling outperforms recently proposed UCB algorithms.
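The abstract describes the algorithm only at a high level. For concreteness, the following is a minimal sketch of posterior (Thompson) sampling in the simplest setting the paper covers: a Bernoulli multiarmed bandit with independent Beta(1, 1) priors on each arm. The function name, prior choice, arm means, and horizon here are illustrative assumptions, not the paper's general formulation, which also allows infinite action spaces and dependent rewards.

```python
import numpy as np

def thompson_sampling_bernoulli(true_means, horizon, seed=0):
    """Posterior (Thompson) sampling for a Bernoulli bandit with
    independent Beta(1, 1) priors on each arm's mean reward."""
    rng = np.random.default_rng(seed)
    k = len(true_means)
    successes = np.ones(k)   # Beta posterior alpha parameters, one per arm
    failures = np.ones(k)    # Beta posterior beta parameters, one per arm
    rewards = []
    for _ in range(horizon):
        # Draw a mean-reward estimate for each arm from its posterior...
        theta = rng.beta(successes, failures)
        # ...and play the arm that would be optimal if the draw were the truth.
        arm = int(np.argmax(theta))
        reward = rng.binomial(1, true_means[arm])
        # Conjugate Bayesian update of the chosen arm's posterior.
        successes[arm] += reward
        failures[arm] += 1 - reward
        rewards.append(reward)
    return np.array(rewards)

# Illustrative run: play over time concentrates on the 0.8 arm.
rewards = thompson_sampling_bernoulli([0.3, 0.5, 0.8], horizon=2000)
print(rewards.mean())
```

Each period, the algorithm selects the action that is optimal under a single posterior sample, so arms are chosen with probability equal to their posterior probability of being optimal; this is the probability-matching behavior the abstract refers to.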

Original language: English (US)
Pages (from-to): 1221-1243
Number of pages: 23
Journal: Mathematics of Operations Research
Volume: 39
Issue number: 4
State: Published - Nov 1 2014

Keywords

  • Multiarmed bandits
  • Online optimization
  • Thompson sampling

ASJC Scopus subject areas

  • Mathematics (all)
  • Computer Science Applications
  • Management Science and Operations Research
