Adaptive, doubly optimal no-regret learning in strongly monotone and exp-concave games with gradient feedback
Online gradient descent (OGD) is well-known to be doubly optimal under strong convexity or
monotonicity assumptions: (1) in the single-agent setting, it achieves an optimal regret of Θ …
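For reference, the projected OGD update these optimality results concern is simple to state. Below is a minimal sketch with the classical 1/(μt) step size used under μ-strong convexity; the `project` oracle and the quadratic toy example are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def projected_ogd(grad, project, x0, mu, T):
    """Online gradient descent with the classical 1/(mu*t) step size used
    under mu-strong convexity/monotonicity. grad(t, x) returns the round-t
    gradient feedback; project maps onto the feasible set."""
    x = np.asarray(x0, dtype=float)
    for t in range(1, T + 1):
        eta = 1.0 / (mu * t)                 # decaying step size
        x = project(x - eta * grad(t, x))    # gradient step, then projection
    return x

# Toy usage: a 2-d strongly convex loss over the unit ball. In a game,
# each player would run this loop on its own partial gradient.
proj_ball = lambda z: z / max(1.0, np.linalg.norm(z))
x_final = projected_ogd(lambda t, x: 2.0 * (x - 0.3), proj_ball,
                        x0=np.zeros(2), mu=2.0, T=1000)
```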
An accelerated variance reduced extra-point approach to finite-sum VI and optimization
In this paper, we develop stochastic variance reduced algorithms for solving a class of finite-
sum monotone VI, where the operator consists of the sum of finitely many monotone VI …
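The workhorse inside such methods is a variance-reduced estimator of the averaged operator. Here is a sketch of the standard SVRG-style estimator combined with an extragradient step, assuming component operators `F_i`; the paper's accelerated extra-point update differs, and this shows only the finite-sum variance-reduction idea.

```python
import numpy as np

def vr_extragradient(F_i, n, x0, eta, epochs, inner, rng=None):
    """Extragradient with an SVRG-style estimator for F(x) = (1/n) sum_i F_i(x):
    g(z) = F_i(z) - F_i(w) + F(w), where w is a snapshot refreshed each epoch.
    The estimator is unbiased and has small variance when z is close to w."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    for _ in range(epochs):
        w = x.copy()
        F_w = sum(F_i(i, w) for i in range(n)) / n      # full operator at snapshot
        for _ in range(inner):
            i = rng.integers(n)
            g = lambda z: F_i(i, z) - F_i(i, w) + F_w   # variance-reduced sample
            y = x - eta * g(x)                          # extrapolation step
            x = x - eta * g(y)                          # extragradient update
    return x
```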
General Procedure to Provide High-Probability Guarantees for Stochastic Saddle Point Problems
D Li, H Li, J Zhang - Journal of Scientific Computing, 2024 - Springer
This paper considers smooth strongly convex and strongly concave stochastic saddle point
(SSP) problems. Suppose there is an arbitrary oracle that in expectation returns an ϵ …
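An in-expectation oracle is usually turned into a high-probability guarantee by amplification: run the oracle several times independently and keep the best candidate. A sketch of that classical boosting step, assuming a computable gap estimate; the paper's general procedure may be more refined than this.

```python
import numpy as np

def boost_to_high_probability(solve_once, gap_estimate, m):
    """If solve_once() returns a point with E[gap] <= eps, Markov's inequality
    gives gap <= 2*eps with probability >= 1/2 per run, so the best of m
    independent runs fails with probability at most 2**(-m), up to the
    accuracy of gap_estimate. Classical amplification, shown for intuition."""
    candidates = [solve_once() for _ in range(m)]
    gaps = [gap_estimate(z) for z in candidates]
    return candidates[int(np.argmin(gaps))]
```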
Distributed stochastic Nash equilibrium seeking under heavy-tailed noises
This paper studies the distributed stochastic Nash equilibrium seeking problem under heavy-
tailed noises. Unlike the traditional stochastic Nash equilibrium algorithms, where the …
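Under heavy-tailed noise the usual fix is to clip each stochastic sample before using it, which restores concentration even when the variance is infinite. A one-step sketch of gradient clipping; the cited distributed algorithm combines this idea with consensus updates that this snippet does not show.

```python
import numpy as np

def clipped_step(x, g, eta, tau):
    """One clipped stochastic (pseudo)gradient step: shrink the sample g onto
    the ball of radius tau, then take the usual step. Clipping trades a small
    bias for bounded updates, which is what heavy-tailed analyses exploit."""
    norm = np.linalg.norm(g)
    if norm > tau:
        g = g * (tau / norm)
    return x - eta * g
```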
A distributed stochastic forward-backward-forward self-adaptive algorithm for Cartesian stochastic variational inequalities
In this paper, we consider a Cartesian stochastic variational inequality with a
high-dimensional solution space. This mathematical formulation captures a wide range of …
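The underlying iteration is Tseng's forward-backward-forward (FBF) method, which needs only one projection per step and handles merely monotone operators. A single-block sketch with a simple self-adaptive step-size rule; the paper's algorithm runs this distributed over Cartesian blocks with stochastic samples of the operator.

```python
import numpy as np

def fbf_self_adaptive(F, project, x0, lam=1.0, theta=0.5, iters=1000):
    """Tseng's FBF iteration: y = P_C(x - lam*F(x)), then the correction
    x <- y + lam*(F(x) - F(y)). The step size is adapted so that
    lam*||F(x)-F(y)|| <= theta*||x-y||, avoiding a Lipschitz constant."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        Fx = F(x)
        y = project(x - lam * Fx)        # forward-backward step
        Fy = F(y)
        x_next = y + lam * (Fx - Fy)     # second forward (correction) step
        dF = np.linalg.norm(Fx - Fy)
        if dF > 0:                       # self-adaptive step-size shrink
            lam = min(lam, theta * np.linalg.norm(x - y) / dF)
        x = x_next
    return x
```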
Accelerated Stochastic Min-Max Optimization Based on Bias-corrected Momentum
Lower-bound analyses for nonconvex-strongly-concave minimax optimization problems
have shown that stochastic first-order algorithms require at least $\mathcal{O}(\varepsilon …
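Bias-corrected momentum refers to the STORM-type recursive estimator, whose correction term reuses the same sample at two consecutive iterates. A minimal sketch of that single update; applying it to both players of a min-max problem, as the paper does, is not shown here.

```python
def storm_update(d_prev, x, x_prev, grad, xi, a=0.1):
    """Bias-corrected momentum (STORM-style) estimator:
        d = grad(x, xi) + (1 - a) * (d_prev - grad(x_prev, xi)).
    Reusing the same sample xi at x and x_prev cancels the bias that plain
    momentum d = (1-a)*d_prev + a*grad(x, xi) would accumulate, while
    keeping its variance-reduction effect."""
    return grad(x, xi) + (1.0 - a) * (d_prev - grad(x_prev, xi))
```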
Stochastic Approximation Proximal Subgradient Method for Stochastic Convex-Concave Minimax Optimization
YH Dai, J Wang, L Zhang - arXiv preprint arXiv:2403.20205, 2024 - arxiv.org
This paper presents a stochastic approximation proximal subgradient (SAPS) method for
stochastic convex-concave minimax optimization. By accessing unbiased and variance …
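The template such a method iterates is a stochastic subgradient step in each variable followed by a proximal step on the nonsmooth terms. A sketch with l1 regularizers standing in for the prox-friendly terms; this shows the generic primal-dual prox-subgradient pattern, not the paper's exact SAPS iteration.

```python
import numpy as np

def prox_l1(z, t):
    """Prox of t*||.||_1 (soft-thresholding), one concrete prox example."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_subgradient_step(x, y, gx, gy, eta):
    """One step on a saddle L(x,y) + r(x) - s(y): descend in x with its
    stochastic subgradient gx, ascend in y with gy, then apply the prox of
    the nonsmooth terms (both taken as l1 here for illustration)."""
    return prox_l1(x - eta * gx, eta), prox_l1(y + eta * gy, eta)
```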
Dynamic stochastic projection method for multistage stochastic variational inequalities
Stochastic approximation (SA) type methods have been well studied for solving single-stage
stochastic variational inequalities (SVIs). This paper proposes a dynamic stochastic …
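For reference, the classical single-stage SA projection iteration the snippet alludes to is, in standard notation (the paper's dynamic scheme extends this to the multistage setting):

```latex
% SA step for a single-stage SVI with F(x) = E[F(x, xi)] and feasible set X:
\[
  x_{k+1} = \Pi_X\bigl(x_k - \gamma_k F(x_k, \xi_k)\bigr),
  \qquad \gamma_k > 0,\quad \sum_k \gamma_k = \infty,\quad \sum_k \gamma_k^2 < \infty.
\]
```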
First-order methods for stochastic variational inequality problems with function constraints
The monotone Variational Inequality (VI) is a general model with important applications in
various engineering and scientific domains. In numerous instances, the VI problems are …
Rapid Learning in Constrained Minimax Games with Negative Momentum
Z Fang, Z Liu, C Yu, C Hu - arXiv preprint arXiv:2501.00533, 2024 - arxiv.org
In this paper, we delve into the utilization of the negative momentum technique in
constrained minimax games. From an intuitive mechanical standpoint, we introduce a novel …
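A mechanical intuition for negative momentum: game dynamics rotate around the equilibrium, and a heavy-ball term with β < 0 acts as friction on that rotation. A minimal unconstrained sketch with alternating updates, where negative momentum is known to help on bilinear games (Gidel et al., 2019); parameters are illustrative and the paper's constrained scheme is not shown.

```python
import numpy as np

def alt_gda_negative_momentum(grad_x, grad_y, x0, y0,
                              eta=0.05, beta=-0.5, iters=2000):
    """Alternating gradient descent-ascent with heavy-ball momentum; a
    negative coefficient beta damps the rotational component of the game
    vector field instead of amplifying it."""
    x, y = np.asarray(x0, float), np.asarray(y0, float)
    x_prev, y_prev = x.copy(), y.copy()
    for _ in range(iters):
        x_next = x - eta * grad_x(x, y) + beta * (x - x_prev)
        x_prev, x = x, x_next
        y_next = y + eta * grad_y(x, y) + beta * (y - y_prev)  # uses updated x
        y_prev, y = y, y_next
    return x, y

# Toy bilinear game f(x, y) = x*y: plain simultaneous GDA spirals outward,
# while alternating updates with negative momentum damp the rotation.
x_T, y_T = alt_gda_negative_momentum(lambda x, y: y, lambda x, y: x,
                                     x0=np.ones(1), y0=np.ones(1))
```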