Adaptive learning in continuous games: Optimal regret bounds and convergence to Nash equilibrium
In game-theoretic learning, several agents are simultaneously following their individual
interests, so the environment is non-stationary from each player's perspective. In this context …
Solving nonconvex-nonconcave min-max problems exhibiting weak minty solutions
A. Böhm, arXiv preprint arXiv:2201.12247, 2022.
We investigate a structured class of nonconvex-nonconcave min-max problems exhibiting
so-called "weak Minty" solutions, a notion which was only recently introduced, but is …
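For context, the condition is commonly stated as follows; this is a hedged sketch, and the exact constant in the paper may differ.

```latex
% Weak Minty solution (one common formulation; the paper's constant may differ):
% u^* is a weak Minty solution of the VI with operator F if there exists \rho \ge 0
% such that, for all u in the domain,
\langle F(u),\, u - u^{*} \rangle \;\ge\; -\tfrac{\rho}{2}\,\lVert F(u) \rVert^{2}.
% Taking \rho = 0 recovers the classical Minty variational inequality.
```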
Fast stochastic bregman gradient methods: Sharp analysis and variance reduction
We study the problem of minimizing a relatively-smooth convex function using stochastic
Bregman gradient methods. We first prove the convergence of Bregman Stochastic Gradient …
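As a point of reference, here is a minimal sketch of one stochastic Bregman (mirror) gradient step, instantiated with the negative-entropy mirror map on the simplex; the function names and step-size rule are illustrative and not taken from the paper.

```python
import numpy as np

def bregman_sgd_step(x, stoch_grad, step_size):
    """One stochastic Bregman gradient (mirror descent) step.

    Uses the negative-entropy mirror map on the probability simplex, for which
    the Bregman proximal step has the closed form  x_new ∝ x * exp(-step * g).
    Illustrative instance only, not the paper's exact method.
    """
    g = stoch_grad(x)                      # unbiased stochastic gradient at x
    x_new = x * np.exp(-step_size * g)     # multiplicative (entropic) update
    return x_new / x_new.sum()             # re-normalize onto the simplex

# Toy usage: minimize E[<a, x>] over the simplex with noisy linear losses.
rng = np.random.default_rng(0)
a = np.array([0.3, 0.1, 0.6])
x = np.ones(3) / 3
for t in range(1, 201):
    noisy_grad = lambda x: a + 0.1 * rng.standard_normal(3)
    x = bregman_sgd_step(x, noisy_grad, step_size=1.0 / np.sqrt(t))
print(x)  # mass concentrates on the coordinate with the smallest a_i
```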
Adaptive extra-gradient methods for min-max optimization and games
We present a new family of min-max optimization algorithms that automatically exploit the
geometry of the gradient data observed at earlier iterations to perform more informative extra …
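The basic template can be illustrated with an extra-gradient loop whose step size is set AdaGrad-style from the operator values seen so far; the aggregation rule below is a generic sketch, not necessarily the family proposed in the paper.

```python
import numpy as np

def adaptive_extragradient(F, z0, n_iters=500, eps=1e-8):
    """Extra-gradient with an AdaGrad-style step size.

    F(z) is the game/VI operator (e.g. (grad_x f, -grad_y f) for a min-max
    problem).  The step size shrinks with the accumulated squared norm of the
    observed operator values -- an illustrative adaptive rule only.
    """
    z = z0.astype(float)
    accum = 0.0                            # running sum of ||F||^2
    for _ in range(n_iters):
        g = F(z)
        accum += np.dot(g, g)
        eta = 1.0 / np.sqrt(eps + accum)   # adaptive step size
        z_half = z - eta * g               # extrapolation (leading) step
        z = z - eta * F(z_half)            # update with the extrapolated gradient
    return z

# Toy bilinear saddle point min_x max_y x*y: operator F(x, y) = (y, -x).
F = lambda z: np.array([z[1], -z[0]])
print(adaptive_extragradient(F, np.array([1.0, 1.0])))  # moves toward (0, 0)
```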
Nest your adaptive algorithm for parameter-agnostic nonconvex minimax optimization
Adaptive algorithms like AdaGrad and AMSGrad are successful in nonconvex optimization
owing to their parameter-agnostic ability, requiring no a priori knowledge about problem …
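For reference, the parameter-agnostic flavor of AdaGrad can be sketched as follows; this shows only the plain AdaGrad step, not the paper's nested minimax scheme.

```python
import numpy as np

def adagrad_step(x, g, accum, lr=0.1, eps=1e-8):
    """One AdaGrad step: per-coordinate step sizes from accumulated squared
    gradients, so no problem-dependent constants (e.g. smoothness) are needed.
    Illustrative of the parameter-agnostic property only."""
    accum = accum + g * g
    x = x - lr * g / (np.sqrt(accum) + eps)
    return x, accum

# Toy usage on f(x) = 0.5 * ||x||^2 (gradient g = x).
x, accum = np.array([2.0, -3.0]), np.zeros(2)
for _ in range(300):
    x, accum = adagrad_step(x, x, accum)
print(x)  # approaches the minimizer at the origin
```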
Decentralized local stochastic extra-gradient for variational inequalities
We consider distributed stochastic variational inequalities (VIs) on unbounded domains with
the problem data that is heterogeneous (non-IID) and distributed across many devices. We …
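A generic Local-SGD-style sketch of the pattern (a few local extra-gradient steps on each device's heterogeneous operator, followed by periodic averaging); the communication protocol and step sizes are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def local_extragradient(operators, z0, rounds=50, local_steps=5, eta=0.1):
    """Local extra-gradient with periodic averaging (generic sketch).

    Each device runs `local_steps` extra-gradient steps on its own operator
    (no communication), then the iterates are averaged across devices.
    """
    z = [z0.copy() for _ in operators]          # one iterate per device
    for _ in range(rounds):
        for i, F in enumerate(operators):       # local phase
            for _ in range(local_steps):
                z_half = z[i] - eta * F(z[i])
                z[i] = z[i] - eta * F(z_half)
        z_avg = np.mean(z, axis=0)              # communication: average iterates
        z = [z_avg.copy() for _ in operators]
    return z_avg

# Toy usage: two devices with shifted bilinear operators whose average
# operator has its solution at the origin.
F1 = lambda z: np.array([z[1] + 1.0, -z[0] + 1.0])
F2 = lambda z: np.array([z[1] - 1.0, -z[0] - 1.0])
print(local_extragradient([F1, F2], np.array([1.0, 1.0])))
```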
A novel projection neural network for solving a class of monotone variational inequalities
This article provides a novel projection neural network (PNN) for a category of monotone
variational inequality (MVI). To simplify the calculation, the feasible region of the MVI is …
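As an illustration, the classical projection-type dynamics for a monotone VI can be integrated as below; the specific network proposed in the article may differ.

```python
import numpy as np

def project_box(u, lo, hi):
    """Projection onto a box feasible region [lo, hi]^n."""
    return np.clip(u, lo, hi)

def pnn_trajectory(F, u0, lo, hi, alpha=0.5, lam=1.0, dt=0.01, steps=2000):
    """Forward-Euler integration of the classical projection dynamics for a
    monotone VI:  du/dt = lam * (P_Omega(u - alpha*F(u)) - u).
    Standard projection-type dynamics; the article's network may differ.
    """
    u = u0.astype(float)
    for _ in range(steps):
        u = u + dt * lam * (project_box(u - alpha * F(u), lo, hi) - u)
    return u

# Toy usage: VI with F(u) = u - c on the box [0, 1]^2; the solution is the
# projection of c onto the box.
c = np.array([0.7, 1.5])
print(pnn_trajectory(lambda u: u - c, np.zeros(2), 0.0, 1.0))  # ≈ [0.7, 1.0]
```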
Extra-newton: A first approach to noise-adaptive accelerated second-order methods
In this work, we propose a universal and adaptive second-order method for minimization of
second-order smooth, convex functions. Precisely, our algorithm achieves $O(\sigma/\sqrt …
No-regret learning in games with noisy feedback: Faster rates and adaptivity via learning rate separation
We examine the problem of regret minimization when the learner is involved in a continuous
game with other optimizing agents: in this case, if all players follow a no-regret algorithm, it is …
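A minimal sketch of an optimistic-gradient step in which the extrapolation and the update use separate learning rates; the rate schedules below are illustrative and not the ones analyzed in the paper.

```python
import numpy as np

def optimistic_step(x, grad_prev, grad_fn, eta, gamma):
    """One optimistic-gradient step with separated learning rates:
    gamma scales the extrapolation (reusing the previous gradient) and
    eta scales the actual update.  Rate schedules are illustrative only.
    """
    x_lead = x - gamma * grad_prev          # extrapolate with the last gradient
    g = grad_fn(x_lead)                     # (noisy) feedback at the leading point
    x_new = x - eta * g                     # update the base iterate
    return x_new, g

# Toy usage on the bilinear game min_x max_y x*y with noisy gradient feedback.
rng = np.random.default_rng(1)
z, g_prev = np.array([1.0, 1.0]), np.zeros(2)
F = lambda z: np.array([z[1], -z[0]]) + 0.05 * rng.standard_normal(2)
for t in range(1, 2001):
    z, g_prev = optimistic_step(z, g_prev, F, eta=0.5 / t, gamma=0.05)
print(z)  # the base iterate drifts toward the equilibrium (0, 0)
```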
Inexact model: A framework for optimization and variational inequalities
In this paper, we propose a general algorithmic framework for the first-order methods in
optimization in a broad sense, including minimization problems, saddle-point problems and …
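For context, a hedged statement of the kind of model condition such a framework builds on (a (δ, L)-model of the objective); the paper's definition may include additional structure.

```latex
% A (\delta, L)-model of f at x: a function \psi_\delta(y, x), convex in y,
% with \psi_\delta(x, x) = 0, such that for all y
f(x) + \psi_{\delta}(y, x) \;\le\; f(y)
  \;\le\; f(x) + \psi_{\delta}(y, x) + \frac{L}{2}\,\lVert y - x \rVert^{2} + \delta .
% Taking \psi_\delta(y, x) = \langle \nabla f(x),\, y - x \rangle and \delta = 0
% recovers the usual L-smoothness sandwich used by gradient methods.
```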