TY - JOUR
T1 - Online allocation and pricing
T2 - Constant regret via Bellman inequalities
AU - Vera, Alberto
AU - Banerjee, Siddhartha
AU - Gurvich, Itai
N1 - Funding Information:
Funding: This work was supported by the U.S. Department of Defense [Grants STTR A18B-T007 and W911NF-20-C-0008], the National Science Foundation [Grants CNS-1955997, DMS-1839346, and ECCS-1847393], and the Army Research Laboratory [Grant W911NF-17-1-0094]. Supplemental Material: The online appendices are available at https://doi.org/10.1287/opre.2020.2061.
Publisher Copyright:
© 2021 INFORMS.
PY - 2021/5/1
Y1 - 2021/5/1
N2 - We develop a framework for designing simple and efficient policies for a family of online allocation and pricing problems that includes online packing, budget-constrained probing, dynamic pricing, and online contextual bandits with knapsacks. In each case, we evaluate the performance of our policies in terms of their regret (i.e., additive gap) relative to an offline controller that is endowed with more information than the online controller. Our framework is based on Bellman inequalities, which decompose the loss of an algorithm into two distinct sources of error: (1) arising from computational tractability issues, and (2) arising from estimation/prediction of random trajectories. Balancing these errors guides the choice of benchmarks and leads to policies that are both tractable and backed by strong performance guarantees. In particular, in all our examples, we demonstrate constant-regret policies that only require resolving a linear program in each period, followed by a simple greedy action-selection rule; thus, our policies are practical as well as provably near optimal.
AB - We develop a framework for designing simple and efficient policies for a family of online allocation and pricing problems that includes online packing, budget-constrained probing, dynamic pricing, and online contextual bandits with knapsacks. In each case, we evaluate the performance of our policies in terms of their regret (i.e., additive gap) relative to an offline controller that is endowed with more information than the online controller. Our framework is based on Bellman inequalities, which decompose the loss of an algorithm into two distinct sources of error: (1) arising from computational tractability issues, and (2) arising from estimation/prediction of random trajectories. Balancing these errors guides the choice of benchmarks and leads to policies that are both tractable and backed by strong performance guarantees. In particular, in all our examples, we demonstrate constant-regret policies that only require resolving a linear program in each period, followed by a simple greedy action-selection rule; thus, our policies are practical as well as provably near optimal.
KW - Approximate dynamic programming
KW - Dynamic pricing
KW - Network revenue management
KW - Online packing
KW - Online resource allocation
KW - Stochastic optimization
UR - http://www.scopus.com/inward/record.url?scp=85109134485&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85109134485&partnerID=8YFLogxK
U2 - 10.1287/OPRE.2020.2061
DO - 10.1287/OPRE.2020.2061
M3 - Article
AN - SCOPUS:85109134485
SN - 0030-364X
VL - 69
SP - 821
EP - 840
JO - Operations Research
JF - Operations Research
IS - 3
ER -