Solving nonconvex-nonconcave min-max problems exhibiting weak minty solutions

A Böhm - arXiv preprint arXiv:2201.12247, 2022 - arxiv.org
We investigate a structured class of nonconvex-nonconcave min-max problems exhibiting
so-called weak Minty solutions, a notion which was only recently introduced, but is …
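
As a reading aid (stated here in a common formulation from the weak-Minty literature, not quoted from the paper; the exact constant in front of \lVert F(u)\rVert^2 varies between papers), a point u^\star is called a weak Minty solution for an operator F if there exists \rho \ge 0 such that

\langle F(u),\, u - u^\star \rangle \;\ge\; -\frac{\rho}{2}\,\lVert F(u) \rVert^2 \quad \text{for all } u,

which relaxes the classical Minty condition \langle F(u),\, u - u^\star \rangle \ge 0 recovered at \rho = 0.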

Fast Optimistic Gradient Descent Ascent (OGDA) method in continuous and discrete time

RI Boţ, ER Csetnek, DK Nguyen - Foundations of Computational …, 2023 - Springer
In the framework of real Hilbert spaces, we study continuous-time dynamics as well as
numerical algorithms for the problem of approaching the set of zeros of a single-valued …
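
For context, a minimal sketch of the classical (non-accelerated) OGDA update for an operator F, assuming a Euclidean setting and a hand-picked step size; the function name, step size, and toy problem below are illustrative, and the fast continuous- and discrete-time schemes analyzed in the paper are more involved.

import numpy as np

def ogda(F, z0, step=0.1, iters=2000):
    # Classical optimistic step: z_{k+1} = z_k - step * (2 F(z_k) - F(z_{k-1})).
    z = np.asarray(z0, dtype=float)
    F_prev = F(z)
    for _ in range(iters):
        F_curr = F(z)
        z = z - step * (2.0 * F_curr - F_prev)
        F_prev = F_curr
    return z

# Toy monotone example: bilinear saddle point min_x max_y x*y, i.e. F(x, y) = (y, -x).
# Plain simultaneous gradient descent ascent diverges here; OGDA drifts toward (0, 0).
print(ogda(lambda z: np.array([z[1], -z[0]]), z0=[1.0, 1.0]))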

A systematic approach to Lyapunov analyses of continuous-time models in convex optimization

C Moucer, A Taylor, F Bach - SIAM Journal on Optimization, 2023 - SIAM
First-order methods are often analyzed via their continuous-time models, where their worst-
case convergence properties are usually approached via Lyapunov functions. In this work …
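
A standard worked example of the kind of argument surveyed there (recalled here as background, not taken from the paper): for convex differentiable f with minimizer x^\star, the gradient flow \dot{x}(t) = -\nabla f(x(t)) admits the Lyapunov function

\mathcal{E}(t) \;=\; t\,\bigl(f(x(t)) - f(x^\star)\bigr) \;+\; \tfrac{1}{2}\,\lVert x(t) - x^\star \rVert^2,

whose derivative satisfies \dot{\mathcal{E}}(t) \le -t\,\lVert \nabla f(x(t)) \rVert^2 \le 0 by convexity, and hence \mathcal{E}(t) \le \mathcal{E}(0) yields the rate f(x(t)) - f(x^\star) \le \lVert x(0) - x^\star \rVert^2 / (2t).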

Extragradient Type Methods for Riemannian Variational Inequality Problems

Z Hu, G Wang, X Wang, A Wibisono… - International …, 2024 - proceedings.mlr.press
In this work, we consider monotone Riemannian Variational Inequality Problems (RVIPs),
which encompass both Riemannian convex optimization and minimax optimization as …
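
For reference, a minimal sketch of the Euclidean extragradient (Korpelevich) step that such methods generalize; the function name and step size are illustrative, and the Riemannian versions in the paper replace the linear updates below with exponential maps/retractions and vector transports, which are not modeled here.

import numpy as np

def extragradient(F, z0, step=0.1, iters=2000):
    # Korpelevich's two-step scheme: probe at an extrapolated point, then
    # update the base iterate using the operator value at that probe.
    z = np.asarray(z0, dtype=float)
    for _ in range(iters):
        z_half = z - step * F(z)      # exploratory half-step
        z = z - step * F(z_half)      # corrected full step
    return z

print(extragradient(lambda z: np.array([z[1], -z[0]]), z0=[1.0, 1.0]))  # -> near (0, 0)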

A Primal-Dual Approach to Solving Variational Inequalities with General Constraints

T Chavdarova, T Yang, M Pagliardini… - The Twelfth International …, 2024 - openreview.net
Yang et al. (2023) recently showed how to use first-order gradient methods to solve general
variational inequalities (VIs) under a limiting assumption that analytic solutions of specific …
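
For orientation, the constrained VI problem referred to here has the standard form: find z^\star \in \mathcal{C} such that \langle F(z^\star),\, z - z^\star \rangle \ge 0 for all z \in \mathcal{C}, where the constraint set may be described, for instance, by \mathcal{C} = \{\, z : \varphi_i(z) \le 0,\ i = 1, \dots, m \,\} (notation mine; the paper's general-constraint setting is handled via a primal-dual scheme).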

Riemannian optimistic algorithms

X Wang, D Yuan, Y Hong, Z Hu, L Wang… - arXiv preprint arXiv …, 2023 - arxiv.org
In this paper, we consider Riemannian online convex optimization with dynamic regret. First,
we propose two novel algorithms, namely the Riemannian Online Optimistic Gradient …
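
As a pointer to the key quantity (notation mine, not the paper's): dynamic regret measures the learner's iterates x_t on the manifold against an arbitrary comparator sequence u_t rather than a single fixed point,

\mathrm{Reg}_T^{\mathrm{dyn}} \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \sum_{t=1}^{T} f_t(u_t).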

On a continuous time model of gradient descent dynamics and instability in deep learning

M Rosca, Y Wu, C Qin, B Dherin - arXiv preprint arXiv:2302.01952, 2023 - arxiv.org
The recipe behind the success of deep learning has been the combination of neural
networks and gradient-based optimization. Understanding the behavior of gradient descent …

Continuous-time analysis for variational inequalities: An overview and desiderata

T Chavdarova, YP Hsieh, MI Jordan - arXiv preprint arXiv:2207.07105, 2022 - arxiv.org
Algorithms that solve zero-sum games, multi-objective agent objectives, or, more generally,
variational inequality (VI) problems are notoriously unstable on general problems. Owing to …
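
A one-line illustration of the instability alluded to (mine, not from the paper): for the bilinear game f(x, y) = xy, the operator is F(x, y) = (y, -x), and the continuous-time GDA flow (\dot{x}, \dot{y}) = (-y, x) conserves x^2 + y^2, so trajectories orbit the solution (0, 0) forever instead of converging; discretizing this flow with explicit steps even spirals outward, since \lvert z_{k+1} \rvert^2 = (1 + h^2)\,\lvert z_k \rvert^2.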

A fast optimistic method for monotone variational inequalities

M Sedlmayer, DK Nguyen… - … Conference on Machine …, 2023 - proceedings.mlr.press
We study monotone variational inequalities that can arise as optimality conditions for
constrained convex optimization or convex-concave minimax problems and propose a novel …
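
For context on the structure being exploited (a textbook fact, not a claim specific to the paper): an operator F is monotone when \langle F(u) - F(v),\, u - v \rangle \ge 0 for all u, v, and for a smooth convex-concave coupling f(x, y) the saddle operator F(x, y) = (\nabla_x f(x, y),\, -\nabla_y f(x, y)) is monotone, so first-order optimality of the minimax problem is exactly a monotone VI.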

SDEs for Minimax Optimization

EM Compagnoni, A Orvieto, H Kersting… - International …, 2024 - proceedings.mlr.press
Minimax optimization problems have attracted a lot of attention over the past few years, with
applications ranging from economics to machine learning. While advanced optimization …
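
As a generic illustration of the kind of object being modeled (assumptions mine: a plain stochastic GDA-type SDE dZ_t = -F(Z_t)\,dt + \sigma\,dW_t simulated with Euler-Maruyama, not the refined SDEs derived in the paper; function name and parameters are illustrative):

import numpy as np

def euler_maruyama(F, z0, sigma=0.1, dt=1e-2, steps=5000, seed=0):
    # Simulates dZ_t = -F(Z_t) dt + sigma dW_t on a fixed time grid.
    rng = np.random.default_rng(seed)
    z = np.asarray(z0, dtype=float)
    path = [z.copy()]
    for _ in range(steps):
        dW = rng.normal(size=z.shape) * np.sqrt(dt)
        z = z - dt * F(z) + sigma * dW
        path.append(z.copy())
    return np.array(path)

# Noisy bilinear game: the drift alone only rotates around (0, 0), so in expectation
# the noise pushes the squared norm of the iterates outward over time.
path = euler_maruyama(lambda z: np.array([z[1], -z[0]]), z0=[1.0, 1.0])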