Adaptive, doubly optimal no-regret learning in strongly monotone and exp-concave games with gradient feedback

M Jordan, T Lin, Z Zhou - Operations Research, 2024 - pubsonline.informs.org
Online gradient descent (OGD) is well known to be doubly optimal under strong convexity or
monotonicity assumptions: (1) in the single-agent setting, it achieves an optimal regret of Θ …
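
The update behind this result is the plain projected OGD step with the classic 1/(μt) step size, which gives the Θ(log T) single-agent regret the snippet alludes to. A minimal sketch, assuming μ-strongly convex per-round losses; the toy quadratic losses and unit-ball feasible set below are illustrative, not from the paper:

```python
import numpy as np

def ogd(grad, project, x0, mu, T):
    """Projected online gradient descent with the classic 1/(mu*t) step
    size, which yields O(log T) regret under mu-strong convexity."""
    x = x0.copy()
    for t in range(1, T + 1):
        eta = 1.0 / (mu * t)              # strongly convex step-size schedule
        x = project(x - eta * grad(x, t))
    return x

# Toy run (illustrative only): quadratic per-round losses
# f_t(x) = 0.5 * mu * ||x - c_t||^2 on the unit Euclidean ball.
mu = 1.0
c = lambda t: np.array([np.sin(0.1 * t), np.cos(0.1 * t)])
grad = lambda x, t: mu * (x - c(t))
project = lambda x: x / max(1.0, np.linalg.norm(x))
x_final = ogd(grad, project, np.zeros(2), mu, T=200)
```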

An accelerated variance reduced extra-point approach to finite-sum VI and optimization

K Huang, N Wang, S Zhang - arXiv preprint arXiv:2211.03269, 2022 - arxiv.org
In this paper, we develop stochastic variance reduced algorithms for solving a class of finite-
sum monotone VI, where the operator consists of the sum of finitely many monotone VI …
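
As a rough illustration of the variance-reduction idea for a finite-sum operator F(x) = (1/n) Σ_i F_i(x): an SVRG-style estimator wrapped around an extragradient step. This is a generic sketch, not the paper's extra-point scheme; the step size and epoch lengths are placeholders:

```python
import numpy as np

def svrg_extragradient(Fi, n, project, x0, eta=0.1, epochs=20, inner=None):
    """SVRG-style variance-reduced extragradient for a finite-sum monotone
    VI with operator F(x) = (1/n) * sum_i Fi(i, x)."""
    rng = np.random.default_rng(0)
    inner = inner or n
    x = x0.copy()
    for _ in range(epochs):
        w = x.copy()
        Fw = sum(Fi(i, w) for i in range(n)) / n   # full operator at snapshot
        for _ in range(inner):
            i = rng.integers(n)
            g = Fi(i, x) - Fi(i, w) + Fw           # variance-reduced estimate
            y = project(x - eta * g)               # extrapolation step
            gy = Fi(i, y) - Fi(i, w) + Fw
            x = project(x - eta * gy)              # update step
    return x
```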

General Procedure to Provide High-Probability Guarantees for Stochastic Saddle Point Problems

D Li, H Li, J Zhang - Journal of Scientific Computing, 2024 - Springer
This paper considers smooth strongly convex and strongly concave stochastic saddle point
(SSP) problems. Suppose there is an arbitrary oracle that in expectation returns an ϵ …
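
One standard way to turn such an in-expectation oracle into a high-probability guarantee is confidence boosting: run the oracle independently several times and keep the candidate with the smallest estimated saddle-point gap. A minimal sketch of that generic idea, where `oracle` and `gap_estimate` are hypothetical stand-in callables and the paper's exact procedure may differ:

```python
import numpy as np

def boost_to_high_probability(oracle, gap_estimate, k=9, seed=0):
    """Run an in-expectation oracle k times and keep the candidate with the
    smallest estimated gap. By Markov's inequality each run is a
    3*eps-solution with probability >= 2/3, so (ignoring estimation noise)
    all k runs fail only with probability about 3**(-k)."""
    rng = np.random.default_rng(seed)
    candidates = [oracle(rng) for _ in range(k)]
    gaps = [gap_estimate(c, rng) for c in candidates]
    return candidates[int(np.argmin(gaps))]
```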

Distributed stochastic Nash equilibrium seeking under heavy-tailed noises

C Sun, B Chen, J Wang, Z Wang, L Yu - Automatica, 2025 - Elsevier
This paper studies the distributed stochastic Nash equilibrium seeking problem under heavy-
tailed noises. Unlike the traditional stochastic Nash equilibrium algorithms, where the …
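
Under heavy-tailed noise (only a bounded p-th moment, p in (1, 2]), the usual remedy is to clip each player's stochastic gradient before the update. A single-machine sketch of clipped gradient play, assuming per-player gradient oracles and projections; the paper's distributed/consensus machinery and exact schedules are omitted:

```python
import numpy as np

def clip(g, tau):
    """Scale the noisy gradient to norm at most tau."""
    n = np.linalg.norm(g)
    return g if n <= tau else (tau / n) * g

def clipped_gradient_play(noisy_grads, projects, x0s, T, eta0=0.5, tau0=1.0):
    """Each player descends its own clipped stochastic partial gradient.
    The step and clipping schedules below are illustrative."""
    xs = [x.copy() for x in x0s]
    for t in range(1, T + 1):
        eta, tau = eta0 / t, tau0 * np.sqrt(t)
        gs = [noisy_grads[i](xs) for i in range(len(xs))]
        xs = [projects[i](xs[i] - eta * clip(gs[i], tau))
              for i in range(len(xs))]
    return xs
```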

A distributed stochastic forward-backward-forward self-adaptive algorithm for Cartesian stochastic variational inequalities

L Liu, X Qin, JC Yao - Applied Numerical Mathematics, 2025 - Elsevier
In this paper, we consider a Cartesian stochastic variational inequality with a high
dimensional solution space. This mathematical formulation captures a wide range of …
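
For context, the deterministic forward-backward-forward (Tseng) iteration with a self-adaptive step size looks as follows; the stochastic, Cartesian (block-wise) version in the paper adds sampling and per-block projections. The parameter mu in (0, 1) controls how aggressively the step adapts:

```python
import numpy as np

def fbf_self_adaptive(F, project, x0, lam0=1.0, mu=0.5, T=1000):
    """Tseng's forward-backward-forward method with a self-adaptive step:
    lambda shrinks toward mu * ||x - y|| / ||F(x) - F(y)||, so no Lipschitz
    constant is needed. Deterministic single-operator sketch of the idea."""
    x, lam = x0.copy(), lam0
    for _ in range(T):
        Fx = F(x)
        y = project(x - lam * Fx)        # forward-backward step
        Fy = F(y)
        x_new = y - lam * (Fy - Fx)      # forward correction step
        d = np.linalg.norm(Fx - Fy)
        if d > 1e-12:                    # adapt step for the next iteration
            lam = min(lam, mu * np.linalg.norm(x - y) / d)
        x = x_new
    return x
```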

Accelerated Stochastic Min-Max Optimization Based on Bias-corrected Momentum

H Cai, SA Alghunaim, AH Sayed - arXiv preprint arXiv:2406.13041, 2024 - arxiv.org
Lower-bound analyses for nonconvex strongly-concave minimax optimization problems
have shown that stochastic first-order algorithms require at least $\mathcal{O}(\varepsilon …
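
The bias-corrected momentum in the title is typically a STORM-type estimator: each iteration evaluates one fresh sample at both the new and the previous iterate, so the estimator's bias telescopes away. A generic descent-ascent sketch under that assumption, not the paper's exact algorithm or parameter schedules:

```python
import numpy as np

def storm_gda(grad_at, sample, x0, y0, T, eta=0.05, a=0.1, seed=0):
    """Gradient descent-ascent driven by a STORM-type estimator
    m_t = g_t(z_t) + (1 - a) * (m_{t-1} - g_t(z_{t-1})), where both
    evaluations use the SAME sample to cancel bias."""
    rng = np.random.default_rng(seed)
    x, y = x0.copy(), y0.copy()
    xi = sample(rng)
    mx, my = grad_at(x, y, xi)           # initialize momentum estimators
    for _ in range(T):
        xp, yp = x, y
        x = x - eta * mx                 # descent in x
        y = y + eta * my                 # ascent in y
        xi = sample(rng)                 # one fresh sample per iteration
        gx, gy = grad_at(x, y, xi)
        gxp, gyp = grad_at(xp, yp, xi)   # same sample at the old iterate
        mx = gx + (1 - a) * (mx - gxp)   # bias-corrected momentum
        my = gy + (1 - a) * (my - gyp)
    return x, y
```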

Stochastic Approximation Proximal Subgradient Method for Stochastic Convex-Concave Minimax Optimization

YH Dai, J Wang, L Zhang - arXiv preprint arXiv:2403.20205, 2024 - arxiv.org
This paper presents a stochastic approximation proximal subgradient (SAPS) method for
stochastic convex-concave minimax optimization. By accessing unbiased and variance …
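
One plausible reading of this method class: alternate a stochastic subgradient step with a proximal step on a nonsmooth regularizer, then return ergodic averages. A sketch with an l1 prox as a stand-in; the paper's exact operators, averaging scheme, and step rules may differ:

```python
import numpy as np

def soft_threshold(v, lam):
    """Prox of lam * ||.||_1, used here only as an example prox operator."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def saps_sketch(stoch_subgrad, x0, y0, T, eta0=1.0, lam=0.01, seed=0):
    """Stochastic subgradient step + prox step in x, ascent step in y,
    with standard 1/sqrt(t) step sizes and ergodic averaging."""
    rng = np.random.default_rng(seed)
    x, y = x0.copy(), y0.copy()
    xs, ys = [], []
    for t in range(1, T + 1):
        eta = eta0 / np.sqrt(t)
        gx, gy = stoch_subgrad(x, y, rng)   # unbiased subgradients
        x = soft_threshold(x - eta * gx, eta * lam)
        y = y + eta * gy                    # ascent in the dual variable
        xs.append(x.copy()); ys.append(y.copy())
    return np.mean(xs, axis=0), np.mean(ys, axis=0)
```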

Dynamic stochastic projection method for multistage stochastic variational inequalities

B Zhou, J Jiang, H Sun - Computational Optimization and Applications, 2024 - Springer
Stochastic approximation (SA) type methods have been well studied for solving single-stage
stochastic variational inequalities (SVIs). This paper proposes a dynamic stochastic …
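
The single-stage SA baseline the snippet mentions is the projected stochastic iteration x_{t+1} = Proj_X(x_t - eta_t * F(x_t, xi_t)). A minimal sketch with ergodic averaging; the paper's dynamic, multistage extension (stage-wise projections under nonanticipativity constraints) is not reproduced here:

```python
import numpy as np

def sa_projection(stoch_F, project, x0, T, eta0=1.0, seed=0):
    """Classical SA projection method for a single-stage SVI, returning the
    running ergodic average of the iterates."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    avg = x.copy()
    for t in range(1, T + 1):
        eta = eta0 / np.sqrt(t)                 # standard SA step size
        x = project(x - eta * stoch_F(x, rng))  # projected stochastic step
        avg += (x - avg) / t                    # incremental average
    return avg
```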

First-order methods for stochastic variational inequality problems with function constraints

D Boob, Q Deng, M Khalafi - arXiv preprint arXiv:2304.04778, 2023 - arxiv.org
The monotone Variational Inequality (VI) is a general model with important applications in
various engineering and scientific domains. In numerous instances, the VI problems are …
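
A common first-order template for VIs with function constraints g(x) <= 0 is to lift the constraints into a primal-dual operator on (x, lam) and keep the multipliers in the nonnegative orthant. A generic sketch of that template, not necessarily the paper's method:

```python
import numpy as np

def constrained_vi_primal_dual(F, g, Jg, project_x, x0, lam0, T, eta=0.01):
    """Projected iteration on the lifted operator
    (F(x) + Jg(x)^T lam, -g(x)), where Jg(x) is the Jacobian of the
    constraint map g. The dual step projects lam onto lam >= 0."""
    x, lam = x0.copy(), lam0.copy()
    for _ in range(T):
        gx = g(x)                                      # constraint values
        x = project_x(x - eta * (F(x) + Jg(x).T @ lam))
        lam = np.maximum(lam + eta * gx, 0.0)          # dual ascent + proj
    return x, lam
```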

Rapid Learning in Constrained Minimax Games with Negative Momentum

Z Fang, Z Liu, C Yu, C Hu - arXiv preprint arXiv:2501.00533, 2024 - arxiv.org
In this paper, we study the use of the negative momentum technique in
constrained minimax games. From an intuitive mechanical standpoint, we introduce a novel …
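
Mechanically, negative momentum is a heavy-ball term with beta < 0, which damps the rotational dynamics that plague gradient descent-ascent on games. A projected sketch with illustrative parameter values rather than the paper's tuned ones:

```python
import numpy as np

def negative_momentum_gda(grad_xy, project, x0, y0, T, eta=0.05, beta=-0.3):
    """Projected gradient descent-ascent with a heavy-ball term whose
    coefficient beta is negative, i.e. the momentum opposes the previous
    displacement and damps oscillations."""
    x, y = x0.copy(), y0.copy()
    xp, yp = x.copy(), y.copy()          # previous iterates
    for _ in range(T):
        gx, gy = grad_xy(x, y)
        x, xp = project(x - eta * gx + beta * (x - xp)), x
        y, yp = project(y + eta * gy + beta * (y - yp)), y
    return x, y
```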