A forward-backward splitting method for monotone inclusions without cocoercivity

Y Malitsky, MK Tam - SIAM Journal on Optimization, 2020 - SIAM
In this work, we propose a simple modification of the forward-backward splitting method for
finding a zero in the sum of two monotone operators. Our method converges under the same …
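
The modification in question, the forward-reflected-backward method, reuses the previous forward evaluation so that only monotonicity and Lipschitz continuity are needed, not cocoercivity. A minimal sketch for the unconstrained case B = 0 (so the resolvent step is the identity); the rotation operator and step size are illustrative choices, not the paper's experiments:

```python
import math

def A(x):
    # 90-degree rotation: monotone and 1-Lipschitz, but NOT cocoercive,
    # so the classical forward-backward (gradient) step fails on it
    return (x[1], -x[0])

def forward_reflected_backward(x0, lam=0.4, iters=100):
    # x_{k+1} = x_k - lam * (2*A(x_k) - A(x_{k-1})), with lam < 1/(2L);
    # the resolvent of B = 0 is the identity, so no backward step appears
    x_prev, x = x0, x0
    for _ in range(iters):
        ax, ax_prev = A(x), A(x_prev)
        x_prev, x = x, tuple(xi - lam * (2 * a - ap)
                             for xi, a, ap in zip(x, ax, ax_prev))
    return x

x = forward_reflected_backward((1.0, 1.0))
print(math.hypot(*x))  # iterates spiral in toward the unique zero at the origin
```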

Accelerated Algorithms for Smooth Convex-Concave Minimax Problems with O(1/k^2) Rate on Squared Gradient Norm


TH Yoon, EK Ryu - International Conference on Machine …, 2021 - proceedings.mlr.press
In this work, we study the computational complexity of reducing the squared gradient
magnitude for smooth minimax optimization problems. First, we present algorithms with …
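
A sketch in the spirit of the paper's extra anchored gradient (EAG) scheme, which combines an extragradient step with anchoring toward the initial point; the bilinear saddle operator, step size, and iteration count below are illustrative assumptions, not the paper's setup:

```python
import math

def F(z):
    # saddle operator of f(u, v) = u*v: F(z) = (z2, -z1); monotone, 1-Lipschitz
    return (z[1], -z[0])

def eag(z0, alpha=0.125, iters=2000):
    # both the half-step and the full step blend in the anchor term
    # beta_k * (z0 - z_k), with beta_k = 1/(k+2) vanishing over time
    z = z0
    for k in range(iters):
        beta = 1.0 / (k + 2)
        fz = F(z)
        z_half = tuple(zi + beta * (z0i - zi) - alpha * fi
                       for zi, z0i, fi in zip(z, z0, fz))
        fh = F(z_half)
        z = tuple(zi + beta * (z0i - zi) - alpha * fi
                  for zi, z0i, fi in zip(z, z0, fh))
    return z

z = eag((1.0, 1.0))
print(math.hypot(*F(z)))  # squared gradient norm decays at O(1/k^2)
```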

Convergence of sequences: A survey

B Franci, S Grammatico - Annual Reviews in Control, 2022 - Elsevier
Convergent sequences of real numbers play a fundamental role in many different problems
in system theory, e.g., in Lyapunov stability analysis, as well as in optimization theory and …

Operator splitting performance estimation: Tight contraction factors and optimal parameter selection

EK Ryu, AB Taylor, C Bergeling, P Giselsson - SIAM Journal on Optimization, 2020 - SIAM
We propose a methodology for studying the performance of common splitting methods
through semidefinite programming. We prove tightness of the methodology and demonstrate …
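
The paper's machinery casts worst-case analysis as a semidefinite program. As a much smaller illustration of the same idea, one can recover the tight contraction factor of a single gradient step over one-dimensional quadratics, where a scalar search over curvatures stands in for the SDP; the function class and parameters here are illustrative assumptions:

```python
# Worst-case ("performance estimation") view of one gradient step
# x+ = x - lam * f'(x) over f(x) = c*x^2/2 with curvature mu <= c <= L.
# Each such f contracts by |1 - lam*c|; maximizing over the curvature
# interval recovers the classical tight factor max(|1-lam*mu|, |1-lam*L|),
# attained at an endpoint since |1 - lam*c| is piecewise linear in c.
mu, L, lam = 0.1, 1.0, 0.5

curvatures = [mu + (L - mu) * i / 1000 for i in range(1001)]
worst = max(abs(1 - lam * c) for c in curvatures)
tight = max(abs(1 - lam * mu), abs(1 - lam * L))
print(worst, tight)
```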

Solving nonconvex-nonconcave min-max problems exhibiting weak minty solutions

A Böhm - arXiv, 2022 - arxiv.org

Accelerated minimax algorithms flock together

TH Yoon, EK Ryu - SIAM Journal on Optimization, 2025 - SIAM
Several new accelerated methods in minimax optimization and fixed-point iterations have
recently been discovered, and, interestingly, they rely on a mechanism distinct from …
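
The shared mechanism these accelerated methods build on is anchoring in the style of the Halpern iteration: each step blends the operator's output with the initial point using a vanishing weight. A minimal sketch with a rotation standing in for a nonexpansive map (operator and schedule are illustrative choices, not the paper's):

```python
import math

def T(x):
    # rotation by 90 degrees: nonexpansive, unique fixed point at the origin
    return (x[1], -x[0])

def halpern(x0, iters=400):
    # anchored iteration: x_{k+1} = beta_k * x0 + (1 - beta_k) * T(x_k),
    # with the anchor weight beta_k = 1/(k+2) decaying to zero
    x = x0
    for k in range(iters):
        beta = 1.0 / (k + 2)
        tx = T(x)
        x = tuple(beta * x0i + (1 - beta) * ti for x0i, ti in zip(x0, tx))
    return x

x = halpern((1.0, 1.0))
print(math.hypot(*x))  # fixed-point residual shrinks at O(1/k)
```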

Two Steps at a Time---Taking GAN Training in Stride with Tseng's Method

A Böhm, M Sedlmayer, ER Csetnek, RI Boț - SIAM Journal on Mathematics of …, 2022 - SIAM
Motivated by the training of generative adversarial networks (GANs), we study methods for
solving minimax problems with additional nonsmooth regularizers. We do so by employing …
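
Tseng's method (forward-backward-forward) handles the nonsmooth regularizer through its resolvent and needs only Lipschitz continuity of the smooth part. A minimal sketch on a bilinear saddle operator, with a box constraint standing in for the nonsmooth term; the operator, box, and step size are illustrative assumptions, not the paper's GAN setting:

```python
import math

def A(x):
    # saddle operator of f(u, v) = u*v: monotone and 1-Lipschitz
    return (x[1], -x[0])

def proj_box(x, r=2.0):
    # projection onto [-r, r]^2 = resolvent of the normal-cone operator,
    # playing the role of the proximal step for the nonsmooth term
    return tuple(max(-r, min(r, xi)) for xi in x)

def tseng_fbf(x0, lam=0.5, iters=150):
    x = x0
    for _ in range(iters):
        ax = A(x)
        # forward-backward step
        y = proj_box(tuple(xi - lam * ai for xi, ai in zip(x, ax)))
        ay = A(y)
        # second forward (correction) step
        x = tuple(yi - lam * (ayi - ai) for yi, ayi, ai in zip(y, ay, ax))
    return x

x = tseng_fbf((1.0, 1.0))
print(math.hypot(*x))  # converges to the saddle point at the origin
```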