TY - CONF
T1 - TOWARDS GENERAL FUNCTION APPROXIMATION IN ZERO-SUM MARKOV GAMES
AU - Huang, Baihe
AU - Lee, Jason D.
AU - Wang, Zhaoran
AU - Yang, Zhuoran
N1 - Funding Information:
BH is supported by the Elite Undergraduate Training Program of School of Mathematical Sciences at Peking University. JDL acknowledges support of the ARO under MURI Award W911NF-11-1-0304, the Sloan Research Fellowship, NSF CCF 2002272, NSF IIS 2107304, and an ONR Young Investigator Award.
Publisher Copyright:
© 2022 ICLR 2022 - 10th International Conference on Learning Representations. All rights reserved.
PY - 2022
Y1 - 2022
N2 - This paper considers two-player zero-sum finite-horizon Markov games with simultaneous moves. The study focuses on the challenging settings where the value function or the model is parameterized by general function classes. Provably efficient algorithms for both decoupled and coordinated settings are developed. In the decoupled setting, where the agent controls a single player and plays against an arbitrary opponent, we propose a new model-free algorithm. The sample complexity is governed by the Minimax Eluder dimension, a new dimension of the function class in Markov games. As a special case, this method improves the regret of the state-of-the-art algorithm by a √d factor when the reward function and transition kernel are parameterized with d-dimensional linear features. In the coordinated setting, where both players are controlled by the agent, we propose a model-based algorithm and a model-free algorithm. For the model-based algorithm, we prove that the sample complexity can be bounded by a generalization of Witness rank to Markov games. The model-free algorithm enjoys a √K-regret upper bound, where K is the number of episodes.
AB - This paper considers two-player zero-sum finite-horizon Markov games with simultaneous moves. The study focuses on the challenging settings where the value function or the model is parameterized by general function classes. Provably efficient algorithms for both decoupled and coordinated settings are developed. In the decoupled setting, where the agent controls a single player and plays against an arbitrary opponent, we propose a new model-free algorithm. The sample complexity is governed by the Minimax Eluder dimension, a new dimension of the function class in Markov games. As a special case, this method improves the regret of the state-of-the-art algorithm by a √d factor when the reward function and transition kernel are parameterized with d-dimensional linear features. In the coordinated setting, where both players are controlled by the agent, we propose a model-based algorithm and a model-free algorithm. For the model-based algorithm, we prove that the sample complexity can be bounded by a generalization of Witness rank to Markov games. The model-free algorithm enjoys a √K-regret upper bound, where K is the number of episodes.
UR - http://www.scopus.com/inward/record.url?scp=85137695562&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85137695562&partnerID=8YFLogxK
M3 - Paper
AN - SCOPUS:85137695562
T2 - 10th International Conference on Learning Representations, ICLR 2022
Y2 - 25 April 2022 through 29 April 2022
ER -