Provably Efficient Generalized Lagrangian Policy Optimization for Safe Multi-Agent Reinforcement Learning

Dongsheng Ding, Xiaohan Wei, Zhuoran Yang, Zhaoran Wang, Mihailo R. Jovanović

Research output: Contribution to journal › Conference article › peer-review

5 Scopus citations

Abstract

We examine online safe multi-agent reinforcement learning using constrained Markov games, in which agents compete by maximizing their expected total rewards under a constraint on expected total utilities. Our focus is confined to an episodic two-player zero-sum constrained Markov game with independent transition functions that are unknown to the agents, adversarial reward functions, and stochastic utility functions. For such a Markov game, we employ an approach based on the occupancy measure to formulate it as an online constrained saddle-point problem with an explicit constraint. We extend the Lagrange multiplier method in constrained optimization to handle the constraint by creating a generalized Lagrangian with minimax decision primal variables and a dual variable. Next, we develop an upper confidence reinforcement learning algorithm to solve this Lagrangian problem while balancing exploration and exploitation. Our algorithm updates the minimax decision primal variables via online mirror descent and the dual variable via a projected gradient step, and we prove that it enjoys a sublinear rate O((|X| + |Y|)L√(T(|A| + |B|))) for both regret and constraint violation after playing T episodes of the game. Here, L is the horizon of each episode, and (|X|, |A|) and (|Y|, |B|) are the state/action space sizes of the min-player and the max-player, respectively. To the best of our knowledge, we provide the first provably efficient online safe reinforcement learning algorithm in constrained Markov games.
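The primal-dual scheme summarized in the abstract can be sketched on a toy one-step constrained zero-sum game. Everything below is an illustrative assumption rather than the paper's setting: the payoff matrix `R`, utility matrix `U`, threshold `c`, step sizes, and dual cap `lam_max` are made up, and the episodic Markov structure and upper-confidence exploration are omitted. The sketch only shows the two update rules named in the abstract: entropic online mirror descent for the primal (policy) variables and a projected gradient step for the dual variable.

```python
import math

# Hypothetical toy instance: min-player x vs. max-player y with reward
# matrix R, plus a utility matrix U whose expected value x'Uy must stay
# above a threshold c. All numbers are illustrative assumptions.
R = [[1.0, -1.0], [-1.0, 1.0]]   # matching-pennies payoff (min-player pays)
U = [[1.0, 0.0], [0.0, 1.0]]     # stochastic utility
c = 0.4                          # required expected utility
eta, xi = 0.1, 0.1               # primal / dual step sizes
lam_max = 5.0                    # projection interval [0, lam_max] for the dual

x = [0.5, 0.5]; y = [0.5, 0.5]; lam = 0.0

def bilinear(p, M, q):
    """Compute p' M q for 2x2 M."""
    return sum(p[i] * M[i][j] * q[j] for i in range(2) for j in range(2))

for _ in range(2000):
    # Gradients of the generalized Lagrangian
    #   L(x, y, lam) = x'Ry + lam * (c - x'Uy)
    gx = [sum((R[i][j] - lam * U[i][j]) * y[j] for j in range(2)) for i in range(2)]
    gy = [sum(x[i] * (R[i][j] - lam * U[i][j]) for i in range(2)) for j in range(2)]
    # Online mirror descent with the entropic regularizer
    # (exponentiated gradient): min-player descends, max-player ascends.
    x = [x[i] * math.exp(-eta * gx[i]) for i in range(2)]
    y = [y[j] * math.exp(+eta * gy[j]) for j in range(2)]
    sx, sy = sum(x), sum(y)
    x = [v / sx for v in x]; y = [v / sy for v in y]
    # Projected gradient step on the dual variable, clipped to [0, lam_max].
    lam = min(max(lam + xi * (c - bilinear(x, U, y)), 0.0), lam_max)

print("expected utility:", bilinear(x, U, y), "dual variable:", lam)
```

The dual variable grows only while the utility constraint is violated and is projected back to a bounded interval, which mirrors (in a much simpler setting) how the generalized Lagrange multiplier method trades off reward maximization against constraint satisfaction.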

Original language: English (US)
Pages (from-to): 315-332
Number of pages: 18
Journal: Proceedings of Machine Learning Research
Volume: 211
State: Published - 2023
Event: 5th Annual Conference on Learning for Dynamics and Control, L4DC 2023 - Philadelphia, United States
Duration: Jun 15 2023 - Jun 16 2023

Funding

The work of D. Ding and M. R. Jovanović was supported in part by the National Science Foundation under awards ECCS-1708906 and 1809833. Part of this work was done while D. Ding was with the University of Southern California. We also thank NeurIPS 2022 reviewers for providing helpful comments.

Keywords

  • constrained Markov game
  • generalized Lagrange multiplier method
  • online mirror descent
  • safe multi-agent reinforcement learning
  • upper confidence reinforcement learning

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability

