Accelerated Algorithms for Smooth Convex-Concave Minimax Problems with O(1/k^2) Rate on Squared Gradient Norm

TH Yoon, EK Ryu - International Conference on Machine …, 2021 - proceedings.mlr.press
In this work, we study the computational complexity of reducing the squared gradient
magnitude for smooth minimax optimization problems. First, we present algorithms with …

Adaptive learning in continuous games: Optimal regret bounds and convergence to Nash equilibrium

YG Hsieh, K Antonakopoulos… - … on Learning Theory, 2021 - proceedings.mlr.press
In game-theoretic learning, several agents are simultaneously following their individual
interests, so the environment is non-stationary from each player's perspective. In this context …

AdaGrad avoids saddle points

K Antonakopoulos, P Mertikopoulos… - International …, 2022 - proceedings.mlr.press
Adaptive first-order methods in optimization have widespread ML applications due to their
ability to adapt to non-convex landscapes. However, their convergence guarantees are …

Solving nonconvex-nonconcave min-max problems exhibiting weak minty solutions

A Böhm - arXiv preprint arXiv:2201.12247, 2022 - arxiv.org
We investigate a structured class of nonconvex-nonconcave min-max problems exhibiting
so-called weak Minty solutions, a notion which was only recently introduced, but is …

Nest your adaptive algorithm for parameter-agnostic nonconvex minimax optimization

J Yang, X Li, N He - Advances in Neural Information …, 2022 - proceedings.neurips.cc
Adaptive algorithms like AdaGrad and AMSGrad are successful in nonconvex optimization
owing to their parameter-agnostic ability, requiring no a priori knowledge about problem …

Adaptive stochastic variance reduction for non-convex finite-sum minimization

A Kavis, S Skoulakis… - Advances in …, 2022 - proceedings.neurips.cc
We propose an adaptive variance-reduction method, called AdaSpider, for minimization of
$L$-smooth, non-convex functions with a finite-sum structure. In essence, AdaSpider …

Stochastic methods in variational inequalities: Ergodicity, bias and refinements

EV Vlatakis-Gkaragkounis, A Giannou… - International …, 2024 - proceedings.mlr.press
For min-max optimization and variational inequalities problems (VIPs), Stochastic
Extragradient (SEG) and Stochastic Gradient Descent Ascent (SGDA) have emerged as …

Exploration-exploitation in multi-agent competition: convergence with bounded rationality

S Leonardos, G Piliouras… - Advances in Neural …, 2021 - proceedings.neurips.cc
The interplay between exploration and exploitation in competitive multi-agent learning is still
far from being well understood. Motivated by this, we study smooth Q-learning, a prototypical …

Smooth monotone stochastic variational inequalities and saddle point problems: A survey

A Beznosikov, B Polyak, E Gorbunov… - European Mathematical …, 2023 - ems.press
This paper is a survey of methods for solving smooth, (strongly) monotone stochastic
variational inequalities. To begin with, we present the deterministic foundation from which …

Fast routing under uncertainty: Adaptive learning in congestion games via exponential weights

DQ Vu, K Antonakopoulos… - Advances in Neural …, 2021 - proceedings.neurips.cc
We examine an adaptive learning framework for nonatomic congestion games where the
players' cost functions may be subject to exogenous fluctuations (e.g., due to disturbances in …