TY - JOUR
T1 - Risk-sensitive reinforcement learning
T2 - Near-optimal risk-sample tradeoff in regret
AU - Fei, Yingjie
AU - Yang, Zhuoran
AU - Chen, Yudong
AU - Wang, Zhaoran
AU - Xie, Qiaomin
N1 - Publisher Copyright:
Copyright © 2020, The Authors. All rights reserved.
PY - 2020/6/22
Y1 - 2020/6/22
N2 - We study risk-sensitive reinforcement learning in episodic Markov decision processes with unknown transition kernels, where the goal is to optimize the total reward under the risk measure of exponential utility. We propose two provably efficient model-free algorithms, Risk-Sensitive Value Iteration (RSVI) and Risk-Sensitive Q-learning (RSQ). These algorithms implement a form of risk-sensitive optimism in the face of uncertainty, which adapts to both risk-seeking and risk-averse modes of exploration. We prove that RSVI attains an Õ(λ(|β|H²) · √(H³S²AT)) regret, while RSQ attains an Õ(λ(|β|H²) · √(H⁴SAT)) regret, where λ(u) = (e³ᵘ − 1)/u for u > 0. In the above, β is the risk parameter of the exponential utility function, S the number of states, A the number of actions, T the total number of timesteps, and H the episode length. On the flip side, we establish a regret lower bound showing that the exponential dependence on |β| and H is unavoidable for any algorithm with an Õ(√T) regret (even when the risk objective is on the same scale as the original reward), thus certifying the near-optimality of the proposed algorithms. Our results demonstrate that incorporating risk awareness into reinforcement learning necessitates an exponential cost in |β| and H, which quantifies the fundamental tradeoff between risk sensitivity (related to aleatoric uncertainty) and sample efficiency (related to epistemic uncertainty). To the best of our knowledge, this is the first regret analysis of risk-sensitive reinforcement learning with the exponential utility.
AB - We study risk-sensitive reinforcement learning in episodic Markov decision processes with unknown transition kernels, where the goal is to optimize the total reward under the risk measure of exponential utility. We propose two provably efficient model-free algorithms, Risk-Sensitive Value Iteration (RSVI) and Risk-Sensitive Q-learning (RSQ). These algorithms implement a form of risk-sensitive optimism in the face of uncertainty, which adapts to both risk-seeking and risk-averse modes of exploration. We prove that RSVI attains an Õ(λ(|β|H²) · √(H³S²AT)) regret, while RSQ attains an Õ(λ(|β|H²) · √(H⁴SAT)) regret, where λ(u) = (e³ᵘ − 1)/u for u > 0. In the above, β is the risk parameter of the exponential utility function, S the number of states, A the number of actions, T the total number of timesteps, and H the episode length. On the flip side, we establish a regret lower bound showing that the exponential dependence on |β| and H is unavoidable for any algorithm with an Õ(√T) regret (even when the risk objective is on the same scale as the original reward), thus certifying the near-optimality of the proposed algorithms. Our results demonstrate that incorporating risk awareness into reinforcement learning necessitates an exponential cost in |β| and H, which quantifies the fundamental tradeoff between risk sensitivity (related to aleatoric uncertainty) and sample efficiency (related to epistemic uncertainty). To the best of our knowledge, this is the first regret analysis of risk-sensitive reinforcement learning with the exponential utility.
UR - http://www.scopus.com/inward/record.url?scp=85095007846&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85095007846&partnerID=8YFLogxK
M3 - Article
AN - SCOPUS:85095007846
JO - Advances in Neural Information Processing Systems
JF - Advances in Neural Information Processing Systems
SN - 1049-5258
ER -