Scalable deep reinforcement learning algorithms for mean field games

M Laurière, S Perrin, S Girgin, P Muller… - International …, 2022 - proceedings.mlr.press
Mean Field Games (MFGs) have been introduced to efficiently approximate games
with very large populations of strategic agents. Recently, the question of learning equilibria …

Approximately solving mean field games via entropy-regularized deep reinforcement learning

K Cui, H Koeppl - International Conference on Artificial …, 2021 - proceedings.mlr.press
The recent mean field game (MFG) formalism facilitates otherwise intractable computation of
approximate Nash equilibria in many-agent settings. In this paper, we consider discrete-time …
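
The snippet breaks off before the method itself; as a rough, generic illustration of how entropy regularization is commonly used to smooth the best response inside an MFG fixed-point loop (the temperature `tau`, the tabular setting, and the helpers `eval_Q` / `propagate_mu` are assumptions of this sketch, not the paper's implementation):

```python
import numpy as np

def softmax_policy(Q, tau):
    """Entropy-regularized best response: pi(a|s) proportional to exp(Q(s,a)/tau)."""
    logits = Q / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def fixed_point_iteration(eval_Q, propagate_mu, mu0, n_iters=50, tau=0.1):
    """Iterate: (1) evaluate Q against the current mean field, (2) soften the
    best response via entropy regularization, (3) propagate the population.
    eval_Q and propagate_mu are problem-specific placeholders."""
    mu = mu0
    for _ in range(n_iters):
        Q = eval_Q(mu)                # shape (n_states, n_actions)
        pi = softmax_policy(Q, tau)   # smoothed best response
        mu = propagate_mu(pi, mu)     # state distribution induced by pi
    return pi, mu
```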

Learning mean field games: A survey

M Laurière, S Perrin, M Geist… - arXiv preprint arXiv …, 2022 - researchgate.net
Non-cooperative and cooperative games with a very large number of players have many
applications but remain generally intractable when the number of players increases …

Policy mirror ascent for efficient and independent learning in mean field games

B Yardim, S Cayci, M Geist… - … Conference on Machine …, 2023 - proceedings.mlr.press
Mean-field games have been used as a theoretical tool to obtain an approximate Nash
equilibrium for symmetric and anonymous N-player games. However, limiting …

Concave utility reinforcement learning: The mean-field game viewpoint

M Geist, J Pérolat, M Laurière, R Elie, S Perrin… - arXiv preprint arXiv …, 2021 - arxiv.org
Concave Utility Reinforcement Learning (CURL) extends RL from linear to concave utilities
in the occupancy measure induced by the agent's policy. This encompasses not only RL but …
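
The snippet states the core idea in words; a one-line formula (notation mine, not taken from the paper) makes the contrast with standard RL concrete. Standard RL is the linear case F(d) = ⟨r, d⟩; concave choices of F cover, e.g., entropy-style pure-exploration objectives.

```latex
\[
  \max_{\pi}\; F\!\big(d^{\pi}\big),
  \qquad
  d^{\pi}(s,a) \;=\; (1-\gamma)\sum_{t\ge 0}\gamma^{t}\,
      \Pr\!\left[s_t = s,\ a_t = a \mid \pi\right],
  \qquad F \text{ concave}.
\]
```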

Multi-player zero-sum Markov games with networked separable interactions

C Park, K Zhang, A Ozdaglar - Advances in Neural …, 2024 - proceedings.neurips.cc
We study a new class of Markov games, (multi-player) zero-sum Markov games with
networked separable interactions (zero-sum NMGs), to model the local interaction …

Model-free mean-field reinforcement learning: mean-field MDP and mean-field Q-learning

R Carmona, M Laurière, Z Tan - The Annals of Applied Probability, 2023 - projecteuclid.org
We study infinite horizon discounted mean field control (MFC) problems with common noise
through the lens of mean field Markov decision processes (MFMDP). We allow the agents to …
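
For readers unfamiliar with the setting named in the snippet, the standard discounted MFC objective reads as follows (notation is mine; common-noise terms are omitted), with Φ propagating the population distribution under the policy:

```latex
\[
  V(\mu_0) \;=\; \sup_{\pi}\;
  \mathbb{E}\!\Big[\sum_{t\ge 0}\gamma^{t}\, r\big(s_t, a_t, \mu_t\big)\Big],
  \qquad
  \mu_{t+1} \;=\; \Phi(\mu_t, \pi).
\]
```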

Scaling up mean field games with online mirror descent

J Perolat, S Perrin, R Elie, M Laurière… - arXiv preprint arXiv …, 2021 - arxiv.org
We address scaling up equilibrium computation in Mean Field Games (MFGs) using Online
Mirror Descent (OMD). We show that continuous-time OMD provably converges to a Nash …
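
As an illustration of a discrete-time counterpart of the update the abstract refers to, here is a minimal tabular sketch of online mirror descent for MFGs; the learning rate, the softmax mirror map in this exact form, and the helpers `eval_Q` / `propagate_mu` are assumptions of the sketch, not the paper's implementation.

```python
import numpy as np

def softmax(x):
    x = x - x.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=1, keepdims=True)

def online_mirror_descent(eval_Q, propagate_mu, mu0, n_states, n_actions,
                          lr=0.1, n_iters=200):
    """Sketch of OMD for MFGs: accumulate Q-values evaluated against the
    current mean field and play the softmax of the running sum."""
    y = np.zeros((n_states, n_actions))    # dual accumulator
    mu = mu0
    for _ in range(n_iters):
        pi = softmax(y)                    # current policy (mirror map)
        Q = eval_Q(pi, mu)                 # Q-values of pi against mean field mu
        y += lr * Q                        # mirror-descent accumulation
        mu = propagate_mu(pi, mu)          # update population distribution
    return softmax(y), mu
```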

Learning while playing in mean-field games: Convergence and optimality

Q **e, Z Yang, Z Wang, A Minca - … Conference on Machine …, 2021 - proceedings.mlr.press
We study reinforcement learning in mean-field games. To achieve the Nash equilibrium,
which consists of a policy and a mean-field state, existing algorithms require obtaining the …
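
For context, the equilibrium object mentioned in the snippet (a policy paired with a mean-field state) is usually characterized as a fixed point; in assumed notation, with J the reward of a representative player and Γ the population distribution induced by a policy:

```latex
\[
  \pi^{*} \in \arg\max_{\pi} \; J\big(\pi, \mu^{*}\big),
  \qquad
  \mu^{*} \;=\; \Gamma\big(\pi^{*}\big).
\]
```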

Unified reinforcement Q-learning for mean field game and control problems

A Angiuli, JP Fouque, M Laurière - Mathematics of Control, Signals, and …, 2022 - Springer
We present a Reinforcement Learning (RL) algorithm to solve infinite horizon
asymptotic Mean Field Game (MFG) and Mean Field Control (MFC) problems. Our approach …
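
The abstract is truncated before the algorithmic details; the sketch below is an assumed, illustrative tabular version of a two-timescale Q-learning loop in this spirit, where the Q-table and the population distribution are updated at different rates. The environment interface `env.step(s, a, mu)` and all rate values are hypothetical, not the paper's.

```python
import numpy as np

def unified_mfq_learning(env, n_states, n_actions, rho_Q=0.1, rho_mu=0.01,
                         gamma=0.99, eps=0.1, n_steps=100_000, rng=None):
    """Two-timescale tabular Q-learning sketch: the Q-table and the empirical
    state distribution mu are updated with separate rates; the ratio
    rho_mu / rho_Q steers the iterates toward the MFG or the MFC solution."""
    rng = rng or np.random.default_rng(0)
    Q = np.zeros((n_states, n_actions))
    mu = np.ones(n_states) / n_states
    s = env.reset()
    for _ in range(n_steps):
        # epsilon-greedy action selection
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next, r = env.step(s, a, mu)       # reward depends on the mean field
        # update of the population distribution at rate rho_mu
        e_s = np.zeros(n_states)
        e_s[s_next] = 1.0
        mu = (1 - rho_mu) * mu + rho_mu * e_s
        # standard Q-learning update at rate rho_Q, against the current mu
        td = r + gamma * Q[s_next].max() - Q[s, a]
        Q[s, a] += rho_Q * td
        s = s_next
    return Q, mu
```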