Multi-agent reinforcement learning: A selective overview of theories and algorithms
Recent years have witnessed significant advances in reinforcement learning (RL), which
has registered tremendous success in solving various sequential decision-making problems …
An overview of multi-agent reinforcement learning from game theoretical perspective
Y Yang, J Wang - arXiv preprint arXiv:2011.00583, 2020 - arxiv.org
Following the remarkable success of the AlphaGo series, 2019 was a booming year that
witnessed significant advances in multi-agent reinforcement learning (MARL) techniques …
Understanding and mitigating gradient flow pathologies in physics-informed neural networks
The widespread use of neural networks across different scientific domains often involves
constraining them to satisfy certain symmetries, conservation laws, or other domain …
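As a rough illustration of the loss-balancing idea behind this line of work, the sketch below recomputes the weights of the boundary/data loss terms of a physics-informed network from gradient statistics, so that terms whose gradients are dwarfed by the PDE-residual gradient are up-weighted. The function name, the moving-average rate alpha, and the exact statistics used (max vs. mean of absolute gradient entries) are illustrative assumptions, not the paper's exact annealing scheme.

import numpy as np

def update_loss_weights(grad_residual, grads_other, weights, alpha=0.1):
    # Rebalance the weights of the non-residual loss terms from gradient statistics:
    # terms whose gradients are small relative to the PDE-residual gradient get larger weights.
    g_r = np.max(np.abs(grad_residual))                       # peak magnitude of the residual-loss gradient
    new_weights = []
    for g_i, w_i in zip(grads_other, weights):
        lam_hat = g_r / (np.mean(np.abs(g_i)) + 1e-12)        # proposed weight for this loss term
        new_weights.append((1 - alpha) * w_i + alpha * lam_hat)  # exponential moving average
    return new_weights

The weighted training loss would then be the residual loss plus the weighted sum of the remaining terms, with the weights refreshed every few optimization steps.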
On gradient descent ascent for nonconvex-concave minimax problems
We consider nonconvex-concave minimax problems, $\min_{\mathbf{x}} \max_{\mathbf{y} \in \mathcal{Y}} f(\mathbf{x}, \mathbf{y})$,
where $f$ is nonconvex in $\mathbf{x}$ but …
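A minimal sketch of (two-timescale) gradient descent ascent for $\min_{\mathbf{x}} \max_{\mathbf{y}\in\mathcal{Y}} f(\mathbf{x},\mathbf{y})$, assuming the caller supplies the partial gradients and a projection onto $\mathcal{Y}$. The alternating update order, the step sizes, and the function names are illustrative choices, not the exact variant analyzed in the paper.

import numpy as np

def gradient_descent_ascent(grad_x, grad_y, project_y, x0, y0,
                            eta_x=1e-3, eta_y=1e-2, steps=10000):
    # Two-timescale GDA in alternating form: a descent step on x,
    # then a projected ascent step on y using the updated x.
    x, y = np.asarray(x0, dtype=float), np.asarray(y0, dtype=float)
    for _ in range(steps):
        x = x - eta_x * grad_x(x, y)               # descent on the nonconvex variable
        y = project_y(y + eta_y * grad_y(x, y))    # projected ascent on the concave variable
    return x, y

For a toy objective such as f(x, y) = x*y with Y = [-1, 1], one would pass grad_x = lambda x, y: y, grad_y = lambda x, y: x, and project_y = lambda y: np.clip(y, -1.0, 1.0).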
Optimizing millions of hyperparameters by implicit differentiation
We propose an algorithm for inexpensive gradient-based hyperparameter optimization that
combines the implicit function theorem (IFT) with efficient inverse Hessian approximations …
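A minimal sketch of an IFT-based hypergradient with a truncated Neumann-series approximation of the inverse training-loss Hessian, in the spirit of this line of work. Dense matrices are used only for clarity (in practice one would work with Hessian-vector products); the function names, alpha, and num_terms are assumptions for illustration.

import numpy as np

def neumann_inverse_hvp(H, v, alpha=0.01, num_terms=50):
    # Approximate H^{-1} v with the truncated Neumann series
    #   H^{-1} v ≈ alpha * sum_{j=0}^{K} (I - alpha*H)^j v,
    # which converges when the spectrum of alpha*H lies in (0, 2).
    p = v.copy()          # current term (I - alpha*H)^j v
    acc = v.copy()        # running sum of the series
    for _ in range(num_terms):
        p = p - alpha * (H @ p)
        acc = acc + p
    return alpha * acc

def hypergradient(dLval_dw, dLval_dlam, H_train, d2Ltrain_dw_dlam, alpha=0.01):
    # Implicit-function-theorem hypergradient of the validation loss:
    #   dL_val/dlam = ∂L_val/∂lam - (∂²L_train/∂w∂lam)^T H^{-1} ∂L_val/∂w.
    v = neumann_inverse_hvp(H_train, dLval_dw, alpha=alpha)
    return dLval_dlam - d2Ltrain_dw_dlam.T @ v

Here dLval_dw has the shape of the weights, d2Ltrain_dw_dlam is the mixed second-derivative block (weights by hyperparameters), and the returned vector has the shape of the hyperparameters.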
A survey and critique of multiagent deep reinforcement learning
Deep reinforcement learning (RL) has achieved outstanding results in recent years. This has
led to a dramatic increase in the number of applications and methods. Recent works have …
Solving a class of non-convex min-max games using iterative first order methods
Recent applications that arise in machine learning have spurred significant interest in solving
min-max saddle point games. This problem has been extensively studied in the convex …
Optimistic mirror descent in saddle-point problems: Going the extra (gradient) mile
Owing to their connection with generative adversarial networks (GANs), saddle-point
problems have recently attracted considerable interest in machine learning and beyond. By …
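For the Euclidean special case (identity mirror map), the update below sketches the "extra (gradient) mile": a look-ahead half-step followed by an update that uses the gradient evaluated at the look-ahead point. The names and step size are illustrative, and this is the generic extra-gradient form rather than the paper's exact optimistic mirror descent scheme.

import numpy as np

def extragradient(grad_x, grad_y, x0, y0, eta=0.1, steps=1000):
    # Extra-gradient for a saddle-point problem min_x max_y f(x, y):
    # extrapolate with the current gradient, then update the original
    # iterate with the gradient taken at the extrapolated point.
    x, y = np.asarray(x0, dtype=float), np.asarray(y0, dtype=float)
    for _ in range(steps):
        xh = x - eta * grad_x(x, y)      # look-ahead descent on x
        yh = y + eta * grad_y(x, y)      # look-ahead ascent on y
        x = x - eta * grad_x(xh, yh)     # update with the extrapolated gradient
        y = y + eta * grad_y(xh, yh)
    return x, y

On the bilinear toy game f(x, y) = x*y, plain simultaneous gradient descent ascent spirals away from the saddle at the origin, whereas this look-ahead update converges toward it.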
Implicit gradient regularization
DGT Barrett, B Dherin - arXiv preprint arXiv:2009.11162, 2020 - arxiv.org
Gradient descent can be surprisingly good at optimizing deep neural networks without
overfitting and without explicit regularization. We find that the discrete steps of gradient …
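In symbols, the backward-error-analysis idea can be sketched as a modified-loss statement: gradient descent with step size $h$ does not follow the gradient flow of the loss $E$ itself, but approximately follows the flow of a loss augmented with a squared gradient-norm penalty whose strength grows with the step size. The exact constant is omitted here; only the proportionality to $h$ is asserted.
\[
\theta_{t+1} = \theta_t - h\,\nabla E(\theta_t)
\qquad\text{approximately follows the gradient flow of}\qquad
\tilde{E}(\theta) = E(\theta) + \lambda\,\bigl\|\nabla E(\theta)\bigr\|^{2},
\quad \lambda \propto h .
\]
This is the sense in which the discrete steps of gradient descent carry an implicit regularizer: larger learning rates penalize sharp regions of the loss surface more strongly.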