Accelerated Algorithms for Smooth Convex-Concave Minimax Problems with O(1/k^2) Rate on Squared Gradient Norm
In this work, we study the computational complexity of reducing the squared gradient
magnitude for smooth minimax optimization problems. First, we present algorithms with …
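To make the anchoring idea behind such accelerated rates concrete, here is a minimal sketch of an anchored extragradient iteration for a smooth monotone operator F (for a minimax problem, F(z) = (grad_x f, -grad_y f)). The anchor weight 1/(k+2) and the step size are illustrative assumptions, not the paper's exact constants.

```python
import numpy as np

def anchored_extragradient(F, z0, eta=0.1, num_iters=100):
    """Sketch of an anchored extragradient step: each iteration pulls
    the iterate back toward the starting point z0 with a vanishing
    anchor weight, which is the mechanism behind O(1/k^2) gradient-norm
    rates. eta and the anchor schedule are illustrative choices."""
    z = z0.copy()
    for k in range(num_iters):
        beta = 1.0 / (k + 2)                       # anchor weight (assumed schedule)
        z_half = z + beta * (z0 - z) - eta * F(z)  # extrapolation step
        z = z + beta * (z0 - z) - eta * F(z_half)  # update step
    return z
```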
Adaptive learning in continuous games: Optimal regret bounds and convergence to Nash equilibrium
In game-theoretic learning, several agents are simultaneously following their individual
interests, so the environment is non-stationary from each player's perspective. In this context …
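A common building block in this line of work is optimistic gradient play with an adaptive step size. The sketch below shows one plausible instantiation (an AdaGrad-style scalar learning rate combined with an optimistic extrapolation); the function and parameter names are illustrative, not the paper's exact scheme.

```python
import numpy as np

def adaptive_optimistic_step(grad_prev, grad_curr, x, accum, base_lr=1.0):
    """One optimistic-gradient step with an AdaGrad-style learning rate,
    a sketch of adaptive no-regret learning in games. accum carries the
    running sum of squared gradient norms across calls."""
    accum += np.sum(grad_curr ** 2)       # accumulate squared gradient norms
    lr = base_lr / np.sqrt(1.0 + accum)   # adaptive step size
    # optimistic update: extrapolate using the previous gradient
    x_new = x - lr * (2.0 * grad_curr - grad_prev)
    return x_new, accum
```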
AdaGrad avoids saddle points
Adaptive first-order methods in optimization have widespread ML applications due to their
ability to adapt to non-convex landscapes. However, their convergence guarantees are …
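For reference, the standard per-coordinate AdaGrad update that the paper analyzes looks as follows; the step size and iteration budget here are illustrative.

```python
import numpy as np

def adagrad(grad_fn, x0, lr=0.1, eps=1e-8, num_iters=1000):
    """Per-coordinate AdaGrad: each coordinate's effective step shrinks
    with the squared gradients accumulated along that coordinate."""
    x = x0.astype(float).copy()
    accum = np.zeros_like(x)
    for _ in range(num_iters):
        g = grad_fn(x)
        accum += g ** 2                       # per-coordinate accumulator
        x -= lr * g / (np.sqrt(accum) + eps)  # coordinate-wise adaptive step
    return x
```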
Solving nonconvex-nonconcave min-max problems exhibiting weak minty solutions
A. Böhm, arXiv preprint arXiv:2201.12247, 2022
We investigate a structured class of nonconvex-nonconcave min-max problems exhibiting
so-called "weak Minty" solutions, a notion which was only recently introduced, but is …
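Methods analyzed for weak Minty problems typically separate the extrapolation step from a shorter update step. The following is a sketch in that spirit, with gamma < 1 shrinking the update; the parameter values are illustrative, not the paper's tuned constants.

```python
import numpy as np

def eg_plus(F, z0, eta=0.1, gamma=0.5, num_iters=100):
    """Sketch of an extragradient variant for nonconvex-nonconcave
    problems with weak Minty solutions: extrapolate with step eta,
    then update with the shorter step gamma * eta."""
    z = z0.copy()
    for _ in range(num_iters):
        z_half = z - eta * F(z)          # extrapolation with full step
        z = z - gamma * eta * F(z_half)  # cautious update with shorter step
    return z
```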
Nest your adaptive algorithm for parameter-agnostic nonconvex minimax optimization
Adaptive algorithms like AdaGrad and AMSGrad are successful in nonconvex optimization
owing to their parameter-agnostic ability, requiring no a priori knowledge about problem …
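The nesting idea can be sketched as an inner adaptive ascent loop on the max variable wrapped inside an outer adaptive descent loop on the min variable. The growing inner-loop budget and the step-size rule below are assumptions for illustration only; the paper's stopping criteria may differ.

```python
import numpy as np

def nested_adaptive_minimax(grad_x, grad_y, x0, y0, outer_iters=100):
    """Sketch of a nested adaptive min-max scheme: an AdaGrad-style
    ascent on y runs for an increasing number of steps, then one
    AdaGrad-style descent step is taken on x."""
    x, y = x0.copy(), y0.copy()
    ax = ay = 1e-8                       # adaptive accumulators
    for k in range(outer_iters):
        for _ in range(k + 1):           # grow inner effort over time (assumed budget)
            gy = grad_y(x, y)
            ay += np.sum(gy ** 2)
            y += gy / np.sqrt(ay)        # adaptive ascent on y
        gx = grad_x(x, y)
        ax += np.sum(gx ** 2)
        x -= gx / np.sqrt(ax)            # adaptive descent on x
    return x, y
```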
Adaptive stochastic variance reduction for non-convex finite-sum minimization
We propose an adaptive variance-reduction method, called AdaSpider, for minimization of
L-smooth, non-convex functions with a finite-sum structure. In essence, AdaSpider …
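The core ingredient is the SPIDER recursive gradient estimator combined with an adaptive step size. In the sketch below, the step rule (inverse square root of accumulated estimator norms) and the interfaces grads(i, x) / full_grad(x) are illustrative stand-ins, not AdaSpider's exact parameter-free schedule.

```python
import numpy as np

def adaspider_sketch(grads, full_grad, x0, n, epochs=10):
    """Sketch of a SPIDER-type variance-reduced loop with an adaptive
    step. grads(i, x) returns the i-th component gradient; full_grad(x)
    returns the full gradient (refreshed once per epoch)."""
    x = x0.copy()
    accum = 1e-8
    for _ in range(epochs):
        v = full_grad(x)                           # fresh full gradient
        for _ in range(n):
            accum += np.sum(v ** 2)
            x_new = x - v / np.sqrt(accum)         # adaptive step (assumed rule)
            i = np.random.randint(n)
            v = grads(i, x_new) - grads(i, x) + v  # SPIDER recursion
            x = x_new
    return x
```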
Stochastic methods in variational inequalities: Ergodicity, bias and refinements
For min-max optimization and variational inequalities problems (VIPs), Stochastic
Extragradient (SEG) and Stochastic Gradient Descent Ascent (SGDA) have emerged as …
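The two methods differ in one extrapolation step. Below is a minimal sketch of a single iteration of each, for an operator F(z) = (grad_x f, -grad_y f); the two-independent-samples version of SEG shown here is one common variant among several.

```python
import numpy as np

def sgda_step(Fz_sample, z, eta):
    """One stochastic gradient descent-ascent step: move against an
    unbiased estimate Fz_sample of F(z)."""
    return z - eta * Fz_sample

def seg_step(F_sample, z, eta):
    """One stochastic extragradient step: extrapolate with one sample
    of the operator, then update with a second independent sample."""
    z_half = z - eta * F_sample(z)     # extrapolation sample
    return z - eta * F_sample(z_half)  # update sample
```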
Exploration-exploitation in multi-agent competition: convergence with bounded rationality
The interplay between exploration and exploitation in competitive multi-agent learning is still
far from being well understood. Motivated by this, we study smooth Q-learning, a prototypical …
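Smooth Q-learning couples a Q-value update with a softmax (Boltzmann) choice rule whose temperature controls exploration. The sketch below is one stateless-game instantiation under assumed parameters, not the paper's exact dynamics.

```python
import numpy as np

def smooth_q_update(Q, action, reward, lr=0.1, temperature=1.0):
    """Sketch of smooth (Boltzmann) Q-learning for one agent in a
    repeated game: Q-values track realized payoffs, and play follows
    a softmax with a fixed exploration temperature."""
    Q = Q.copy()
    Q[action] += lr * (reward - Q[action])  # payoff tracking
    logits = Q / temperature
    policy = np.exp(logits - logits.max())
    policy /= policy.sum()                  # softmax (smoothed best response)
    return Q, policy
```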
Smooth monotone stochastic variational inequalities and saddle point problems: A survey
This paper is a survey of methods for solving smooth, (strongly) monotone stochastic
variational inequalities. To begin with, we present the deterministic foundation from which …
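The deterministic foundation such surveys build on is the classical Korpelevich extragradient method for a monotone VI: find z* with <F(z*), z - z*> >= 0 for all feasible z. A minimal sketch, assuming a user-supplied projection onto the feasible set and an illustrative step size:

```python
import numpy as np

def extragradient_vi(F, project, z0, eta=0.1, num_iters=200):
    """Classical extragradient for a monotone variational inequality:
    a trial step probes the operator, and the corrected step uses the
    operator evaluated at the trial point."""
    z = z0.copy()
    for _ in range(num_iters):
        z_half = project(z - eta * F(z))   # trial (extrapolation) step
        z = project(z - eta * F(z_half))   # corrected step
    return z
```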
Fast routing under uncertainty: Adaptive learning in congestion games via exponential weights
We examine an adaptive learning framework for nonatomic congestion games where the
players' cost functions may be subject to exogenous fluctuations (e.g., due to disturbances in …
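The exponential-weights update at the heart of such schemes is simple: each route's weight decays exponentially with its observed cost, and play is the normalized weight vector. The learning rate below is an illustrative constant; adaptive variants tune it from observed data.

```python
import numpy as np

def exp_weights_step(weights, costs, eta=0.1):
    """One exponential-weights update for route choice in a congestion
    game: penalize costly paths multiplicatively, then renormalize to
    obtain the next mixed strategy."""
    weights = weights * np.exp(-eta * costs)  # penalize costly paths
    return weights / weights.sum()
```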