Parametrized deep Q-networks learning: Reinforcement learning with discrete-continuous hybrid action space

Jiechao Xiong, Qing Wang, Zhuoran Yang, Peng Sun, Lei Han, Yang Zheng, Haobo Fu, Tong Zhang, Ji Liu, Han Liu

Research output: Contribution to journal › Article › peer-review

Abstract

Most existing deep reinforcement learning (DRL) frameworks consider solely either a discrete action space or a continuous action space. Motivated by applications in computer games, we consider the scenario with a discrete-continuous hybrid action space. To handle a hybrid action space, previous works either approximate the hybrid space by discretization or relax it into a continuous set. In this paper, we propose a parametrized deep Q-network (P-DQN) framework for the hybrid action space without approximation or relaxation. Our algorithm combines the spirits of both DQN (dealing with a discrete action space) and DDPG (dealing with a continuous action space) by seamlessly integrating them. Empirical results on a simulation example, on scoring a goal in simulated RoboCup soccer, and on the solo mode of the game King of Glory (KOG) validate the efficiency and effectiveness of our method.
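To make the hybrid action space concrete: P-DQN-style action selection first computes, for each discrete action k, a continuous parameter x_k(s) with a DDPG-style network, then evaluates Q(s, k, x_k) with a DQN-style network and picks the discrete action with the highest value. The sketch below is a minimal, hedged illustration of this selection rule using random linear layers in place of the trained networks; all names, dimensions, and the linear/tanh forms are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, NUM_DISCRETE, PARAM_DIM = 4, 3, 2  # illustrative sizes

# Hypothetical linear "actor": one parameter head per discrete action,
# x_k(s) = tanh(W_k s), standing in for the DDPG-style network.
W_actor = rng.normal(size=(NUM_DISCRETE, PARAM_DIM, STATE_DIM))

# Hypothetical linear Q-network: Q(s, k, x_k) = w_k . [s; x_k],
# standing in for the DQN-style network.
W_q = rng.normal(size=(NUM_DISCRETE, STATE_DIM + PARAM_DIM))

def select_action(state):
    """Greedy hybrid-action selection in the spirit of P-DQN."""
    # 1. Compute a continuous parameter for every discrete action k.
    params = np.tanh(W_actor @ state)          # shape (NUM_DISCRETE, PARAM_DIM)
    # 2. Evaluate Q(s, k, x_k), pairing each k with its own parameter.
    q_values = np.array([
        W_q[k] @ np.concatenate([state, params[k]])
        for k in range(NUM_DISCRETE)
    ])
    # 3. The chosen action is the pair (k, x_k) with the largest Q-value.
    k = int(np.argmax(q_values))
    return k, params[k]

state = rng.normal(size=STATE_DIM)
k, x_k = select_action(state)
print(k, x_k.shape)
```

In training, the discrete head would be updated with a DQN-style temporal-difference loss and the parameter heads with a deterministic-policy-gradient-style objective, which is the integration the abstract refers to.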

Original language: English (US)
Journal: Unknown Journal
State: Published - Oct 10 2018

ASJC Scopus subject areas

  • General

