Online allocation and pricing: Constant regret via Bellman inequalities

Alberto Vera, Siddhartha Banerjee, Itai Gurvich

Research output: Contribution to journal › Article › peer-review



We develop a framework for designing simple and efficient policies for a family of online allocation and pricing problems that includes online packing, budget-constrained probing, dynamic pricing, and online contextual bandits with knapsacks. In each case, we evaluate the performance of our policies in terms of their regret (i.e., additive gap) relative to an offline controller that is endowed with more information than the online controller. Our framework is based on Bellman inequalities, which decompose the loss of an algorithm into two distinct sources of error: (1) error arising from computational tractability issues, and (2) error arising from estimation/prediction of random trajectories. Balancing these errors guides the choice of benchmarks and leads to policies that are both tractable and have strong performance guarantees. In particular, in all our examples, we demonstrate constant-regret policies that only require resolving a linear program in each period, followed by a simple greedy action-selection rule; thus, our policies are practical as well as provably near optimal.
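As a rough illustration of the "resolve an LP, then act greedily" template described in the abstract, the sketch below treats a single-resource online packing problem with i.i.d. request types. The instance data, the 0.5 acceptance threshold, and the function names are illustrative assumptions, not taken from the paper; with one resource the LP relaxation is a fractional knapsack, so the density-ordered greedy fill solves it exactly.

```python
import random

# Hypothetical instance (assumed for illustration): each type is
# (reward, resource weight); arrivals are i.i.d. with these probabilities.
TYPES = [(10.0, 2.0), (6.0, 1.0), (3.0, 1.0)]
PROBS = [0.2, 0.3, 0.5]


def resolve_lp(expected_counts, budget):
    """Solve the LP relaxation of the remaining problem.

    With a single resource the relaxation is a fractional knapsack over
    the expected future arrivals, so filling types in decreasing
    reward/weight order is exactly LP-optimal.
    """
    alloc = [0.0] * len(TYPES)
    order = sorted(range(len(TYPES)),
                   key=lambda j: TYPES[j][0] / TYPES[j][1], reverse=True)
    remaining = budget
    for j in order:
        take = min(expected_counts[j], remaining / TYPES[j][1])
        alloc[j] = take
        remaining -= take * TYPES[j][1]
    return alloc


def resolving_policy(horizon, budget, seed=0):
    """Each period: re-solve the LP on the remaining horizon and budget,
    then greedily accept the current request iff the LP would serve a
    large fraction of its type (the 0.5 cutoff is an arbitrary choice
    for this sketch)."""
    rng = random.Random(seed)
    total_reward = 0.0
    for t in range(horizon):
        j = rng.choices(range(len(TYPES)), weights=PROBS)[0]
        reward, weight = TYPES[j]
        if weight > budget:
            continue  # infeasible request: skip
        expected = [(horizon - t) * p for p in PROBS]
        alloc = resolve_lp(expected, budget)
        if alloc[j] >= 0.5 * expected[j]:
            total_reward += reward
            budget -= weight
    return total_reward
```

Each period costs one LP resolve plus a constant-time acceptance check, matching the "tractable and practical" flavor of the policies the paper advocates, though the actual policies and guarantees are those of the paper, not this toy.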

Original language: English (US)
Pages (from-to): 821-840
Number of pages: 20
Journal: Operations Research
Issue number: 3
State: Published - May 1 2021


Keywords

  • Approximate dynamic programming
  • Dynamic pricing
  • Network revenue management
  • Online packing
  • Online resource allocation
  • Stochastic optimization

ASJC Scopus subject areas

  • Computer Science Applications
  • Management Science and Operations Research


