Closing the gap: Tighter analysis of alternating stochastic gradient methods for bilevel problems

T Chen, Y Sun, W Yin - Advances in Neural Information …, 2021 - proceedings.neurips.cc
Stochastic nested optimization, including stochastic compositional, min-max, and bilevel
optimization, is gaining popularity in many machine learning applications. While the three …

Fednest: Federated bilevel, minimax, and compositional optimization

DA Tarzanagh, M Li… - … on Machine Learning, 2022 - proceedings.mlr.press
Standard federated optimization methods successfully apply to stochastic problems with
single-level structure. However, many contemporary ML problems, including adversarial …

Fast extra gradient methods for smooth structured nonconvex-nonconcave minimax problems

S Lee, D Kim - Advances in Neural Information Processing …, 2021 - proceedings.neurips.cc
Modern minimax problems, such as generative adversarial networks and adversarial training,
are often under a nonconvex-nonconcave setting, and developing an efficient method for …

Federated minimax optimization: Improved convergence analyses and algorithms

P Sharma, R Panda, G Joshi… - … on Machine Learning, 2022 - proceedings.mlr.press
In this paper, we consider nonconvex minimax optimization, which is gaining prominence in
many modern machine learning applications, such as GANs. Large-scale edge-based …

The confluence of networks, games, and learning a game-theoretic framework for multiagent decision making over networks

T Li, G Peng, Q Zhu, T Başar - IEEE Control Systems Magazine, 2022 - ieeexplore.ieee.org
Multiagent decision making over networks has recently attracted an exponentially growing
number of researchers from the systems and control community. The area has gained …

Faster single-loop algorithms for minimax optimization without strong concavity

J Yang, A Orvieto, A Lucchi… - … Conference on Artificial …, 2022 - proceedings.mlr.press
Gradient descent ascent (GDA), the simplest single-loop algorithm for nonconvex minimax
optimization, is widely used in practical applications such as generative adversarial …

Stochastic gradient descent-ascent: Unified theory and new efficient methods

A Beznosikov, E Gorbunov… - International …, 2023 - proceedings.mlr.press
Stochastic Gradient Descent-Ascent (SGDA) is one of the most prominent
algorithms for solving min-max optimization and variational inequality problems (VIP) …

The complexity of nonconvex-strongly-concave minimax optimization

S Zhang, J Yang, C Guzmán… - Uncertainty in …, 2021 - proceedings.mlr.press
This paper studies the complexity of finding approximate stationary points of nonconvex-strongly-concave (NC-SC) smooth minimax problems, in both general and averaged smooth …

Solving a class of non-convex minimax optimization in federated learning

X Wu, J Sun, Z Hu, A Zhang… - Advances in Neural …, 2023 - proceedings.neurips.cc
Minimax problems arise throughout machine learning applications, ranging from
adversarial training and policy evaluation in reinforcement learning to AUROC …

Stochastic gradient descent-ascent and consensus optimization for smooth games: Convergence analysis under expected co-coercivity

N Loizou, H Berard, G Gidel… - Advances in …, 2021 - proceedings.neurips.cc
Two of the most prominent algorithms for solving unconstrained smooth games are the
classical stochastic gradient descent-ascent (SGDA) and the recently introduced stochastic …