Independent policy gradient for large-scale Markov potential games: Sharper rates, function approximation, and game-agnostic convergence
We examine global non-asymptotic convergence properties of policy gradient methods for
multi-agent reinforcement learning (RL) problems in Markov potential games (MPGs). To …
V-Learning: A Simple, Efficient, Decentralized Algorithm for Multiagent RL
A major challenge of multiagent reinforcement learning (MARL) is the curse of multiagents,
where the size of the joint action space scales exponentially with the number of agents. This …
The complexity of Markov equilibrium in stochastic games
We show that computing approximate stationary Markov coarse correlated equilibria (CCE)
in general-sum stochastic games is PPAD-hard, even when there are two players, the game …
On improving model-free algorithms for decentralized multi-agent reinforcement learning
Multi-agent reinforcement learning (MARL) algorithms often suffer from an exponential
sample complexity dependence on the number of agents, a phenomenon known as the …
Breaking the curse of multiagency: Provably efficient decentralized multi-agent RL with function approximation
A unique challenge in Multi-Agent Reinforcement Learning (MARL) is the curse of multiagency, where the description length of the game as well as the complexity of many …
Learning in games: a systematic review
RJ Qin, Y Yu - Science China Information Sciences, 2024 - Springer
Game theory studies the mathematical models for self-interested individuals. Nash
equilibrium is arguably the most central solution in game theory. While finding the Nash …
When are offline two-player zero-sum Markov games solvable?
We study what dataset assumption permits solving offline two-player zero-sum Markov
games. In stark contrast to the offline single-agent Markov decision process, we show that …
Multi-player zero-sum Markov games with networked separable interactions
We study a new class of Markov games, (multi-player) zero-sum Markov games with networked separable interactions (zero-sum NMGs), to model the local interaction …
Breaking the curse of multiagents in a large state space: RL in Markov games with independent linear function approximation
We propose a new model, the independent linear Markov game, for multi-agent reinforcement learning with a large state space and a large number of agents. This is a class …
Sample-efficient reinforcement learning of partially observable Markov games
This paper considers the challenging tasks of Multi-Agent Reinforcement Learning (MARL)
under partial observability, where each agent only sees her own individual observations and …