A communication-efficient multi-agent actor-critic algorithm for distributed reinforcement learning

Yixuan Lin, Kaiqing Zhang, Zhuoran Yang, Zhaoran Wang, Tamer Başar, Romeil Sandhu, Ji Liu

Research output: Contribution to journal › Article › peer-review

Abstract

This paper considers a distributed reinforcement learning problem in which a network of agents aims to cooperatively maximize the globally averaged return through communication with only local neighbors. A randomized, communication-efficient multi-agent actor-critic algorithm is proposed for settings in which the communication relationships, described by a directed graph, may be unidirectional. It is shown that the algorithm solves the problem over strongly connected graphs while requiring each agent to transmit only two scalar-valued variables at each time step.
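The two-scalar communication pattern described above is reminiscent of push-sum average consensus, the standard tool for averaging over directed (possibly unidirectional) strongly connected graphs, in which each agent transmits exactly one value scalar and one weight scalar. The sketch below is a hypothetical illustration of that primitive only, assuming a simple directed ring; it is not the paper's actor-critic algorithm, whose details are not given in this abstract.

```python
import numpy as np

def push_sum_average(rewards, out_neighbors, num_iters=200):
    """Push-sum consensus: each agent repeatedly splits two scalars
    (a value x and a weight w) among itself and its out-neighbors.
    The ratio x/w at every agent converges to the global average."""
    n = len(rewards)
    x = np.array(rewards, dtype=float)  # value scalar held by each agent
    w = np.ones(n)                      # weight scalar held by each agent
    for _ in range(num_iters):
        x_new = np.zeros(n)
        w_new = np.zeros(n)
        for i in range(n):
            targets = out_neighbors[i] + [i]  # send to out-neighbors and self
            share = 1.0 / len(targets)
            for j in targets:
                x_new[j] += share * x[i]  # only two scalars cross each edge
                w_new[j] += share * w[i]
        x, w = x_new, w_new
    return x / w

# Directed, strongly connected ring on 4 agents (unidirectional links)
out_neighbors = {0: [1], 1: [2], 2: [3], 3: [0]}
rewards = [1.0, 2.0, 3.0, 6.0]
estimates = push_sum_average(rewards, out_neighbors)
# every agent's estimate converges to the global average, 3.0
```

Because the graph is only required to be strongly connected, not undirected, plain averaging of received values would be biased; the auxiliary weight scalar is what corrects for the asymmetric in-degrees.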

Original language: English (US)
Journal: Unknown Journal
State: Published - Jul 5 2019

ASJC Scopus subject areas

  • General

