Feedback-based tree search for reinforcement learning

Daniel R. Jiang*, Emmanuel Ekwedike, Han Liu

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Inspired by recent successes of Monte-Carlo tree search (MCTS) in a number of artificial intelligence (AI) application domains, we propose a reinforcement learning (RL) technique that iteratively applies MCTS on batches of small, finite-horizon versions of the original infinite-horizon Markov decision process. The terminal condition of the finite-horizon problems, or the leaf-node evaluator of the decision tree generated by MCTS, is specified using a combination of an estimated value function and an estimated policy function. The recommendations generated by the MCTS procedure are then provided as feedback in order to refine, through classification and regression, the leaf-node evaluator for the next iteration. We provide the first sample complexity bounds for a tree search-based RL algorithm. In addition, we show that a deep neural network implementation of the technique can create a competitive AI agent for the popular multi-player online battle arena (MOBA) game King of Glory.
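As a rough illustration of the loop the abstract describes, the following is a minimal Python sketch, not the authors' implementation: the names env_sampler, mcts, fit_value, and fit_policy are hypothetical placeholders. Each iteration runs MCTS from a batch of sampled root states, with the current value and policy estimates evaluating leaf nodes, then refits those estimates on the MCTS recommendations via classification (actions) and regression (values).

```python
# Hypothetical sketch of the feedback loop described in the abstract.
# env_sampler, mcts, fit_value, and fit_policy are placeholder callables,
# not interfaces from the paper.

def feedback_tree_search(env_sampler, mcts, fit_value, fit_policy,
                         value_fn, policy_fn, n_iterations, batch_size):
    """Iterate: MCTS on finite-horizon subproblems -> refit leaf evaluator."""
    for _ in range(n_iterations):
        states, actions, values = [], [], []
        for _ in range(batch_size):
            s0 = env_sampler()  # root state of a small finite-horizon subproblem
            # MCTS evaluates leaf nodes with the current value/policy estimates
            # and returns a recommended action plus a root value estimate.
            a_star, v_root = mcts(s0, value_fn, policy_fn)
            states.append(s0)
            actions.append(a_star)   # classification targets for the policy
            values.append(v_root)    # regression targets for the value function
        # Feedback step: refine the leaf-node evaluator for the next iteration.
        policy_fn = fit_policy(states, actions)  # e.g., fit a classifier
        value_fn = fit_value(states, values)     # e.g., fit a regressor
    return value_fn, policy_fn
```

In the paper's King of Glory agent the value and policy estimates are deep neural networks; here they are left as abstract callables.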

Original language: English (US)
Title of host publication: 35th International Conference on Machine Learning, ICML 2018
Editors: Jennifer Dy, Andreas Krause
Publisher: International Machine Learning Society (IMLS)
Pages: 3572-3590
Number of pages: 19
ISBN (Electronic): 9781510867963
State: Published - 2018
Event: 35th International Conference on Machine Learning, ICML 2018 - Stockholm, Sweden
Duration: Jul 10 2018 - Jul 15 2018

Publication series

Name: 35th International Conference on Machine Learning, ICML 2018
Volume: 5

Other

Other: 35th International Conference on Machine Learning, ICML 2018
Country/Territory: Sweden
City: Stockholm
Period: 7/10/18 - 7/15/18

Funding

We wish to thank four anonymous reviewers, whose feedback helped to significantly improve the paper. We also thank our colleagues at Tencent AI Lab, particularly Carson Eisenach and Xiangru Lian, for technical help. Daniel Jiang is grateful for the support from Tencent AI Lab through a faculty award. The research of Han Liu was supported by NSF CAREER Award DMS-1454377, NSF IIS-1408910, and NSF IIS-1332109. This material is also based upon work supported by the National Science Foundation under grant no. 1740762 "Collaborative Research: TRIPODS Institute for Optimization and Learning."

ASJC Scopus subject areas

  • Computational Theory and Mathematics
  • Human-Computer Interaction
  • Software
