Stochastic variance reduction for variational inequality methods

A Alacaoglu, Y Malitsky - Conference on Learning Theory, 2022 - proceedings.mlr.press
We propose stochastic variance reduced algorithms for solving convex-concave saddle
point problems, monotone variational inequalities, and monotone inclusions. Our framework …
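
A minimal sketch of the SVRG-style variance-reduced estimator that such methods build on (the toy finite-sum linear operator, snapshot point, and dimensions below are illustrative assumptions, not the paper's construction):

import numpy as np

# Variance-reduced estimate of a finite-sum operator F(z) = (1/n) sum_i F_i(z):
# replace a plain sample F_i(z) with F_i(z) - F_i(w) + Fbar(w), where w is a
# snapshot point and Fbar(w) is the full operator evaluated there once.
rng = np.random.default_rng(0)
n, d = 100, 5
A = rng.standard_normal((n, d, d))     # toy components F_i(z) = A_i z

def F_i(i, z):
    return A[i] @ z

def F_bar(z):
    return A.mean(axis=0) @ z

w = rng.standard_normal(d)             # snapshot point
z = w + 0.1 * rng.standard_normal(d)   # current iterate, close to the snapshot
full_w = F_bar(w)                      # one full pass per snapshot

i = rng.integers(n)
plain = F_i(i, z)                      # naive stochastic estimate of F(z)
vr = F_i(i, z) - F_i(i, w) + full_w    # variance-reduced estimate of F(z)
# Both are unbiased over the random index i, but the second one's error scales
# with ||z - w||, which stays small while iterates remain near the snapshot.
print(np.linalg.norm(plain - F_bar(z)), np.linalg.norm(vr - F_bar(z)))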

Efficiently solving MDPs with stochastic mirror descent

Y Jin, A Sidford - International Conference on Machine …, 2020 - proceedings.mlr.press
We present a unified framework based on primal-dual stochastic mirror descent for
approximately solving infinite-horizon Markov decision processes (MDPs) given a …
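
As a reminder of the basic primitive behind such primal-dual schemes, here is a minimal stochastic mirror descent sketch with the entropy mirror map (exponentiated gradient) over the probability simplex; the linear objective, noise model, and step size are illustrative assumptions, not the paper's MDP setup:

import numpy as np

# Stochastic mirror descent with the entropy regularizer, minimizing a linear
# objective <c, p> over the probability simplex from noisy gradient estimates.
rng = np.random.default_rng(0)
n, T = 10, 5000
c = rng.uniform(size=n)
eta = np.sqrt(np.log(n) / T)               # standard step size for this setup

p = np.full(n, 1.0 / n)
avg = np.zeros(n)
for _ in range(T):
    g = c + rng.normal(scale=0.1, size=n)  # unbiased stochastic gradient
    p = p * np.exp(-eta * g)               # multiplicative (mirror) step
    p /= p.sum()                           # renormalize onto the simplex
    avg += p
avg /= T
print(avg @ c, c.min())  # the averaged iterate's value approaches the optimum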

An improved quantum-inspired algorithm for linear regression

A Gilyén, Z Song, E Tang - Quantum, 2022 - quantum-journal.org
We give a classical algorithm for linear regression analogous to the quantum matrix
inversion algorithm [Harrow, Hassidim, and Lloyd, Physical Review Letters '09] for low-rank …

Sharper rates for separable minimax and finite sum optimization via primal-dual extragradient methods

Y Jin, A Sidford, K Tian - Conference on Learning Theory, 2022 - proceedings.mlr.press
We design accelerated algorithms with improved rates for several fundamental classes of
optimization problems. Our algorithms all build upon techniques related to the analysis of …

Smooth monotone stochastic variational inequalities and saddle point problems: A survey

A Beznosikov, B Polyak, E Gorbunov… - European Mathematical …, 2023 - ems.press
This paper is a survey of methods for solving smooth, (strongly) monotone stochastic
variational inequalities. To begin with, we present the deterministic foundation from which …

Quantum speedups for zero-sum games via improved dynamic Gibbs sampling

A Bouland, YM Getachew, Y Jin… - International …, 2023 - proceedings.mlr.press
We give a quantum algorithm for computing an $\epsilon$-approximate Nash equilibrium of
a zero-sum game in an $m \times n$ payoff matrix with bounded entries. Given a standard …
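
The classical baseline such quantum speedups are measured against is the simultaneous multiplicative-weights dynamics of Freund and Schapire; the sketch below shows that classical baseline (not the paper's quantum algorithm), with the payoff matrix, horizon, and step size chosen purely for illustration:

import numpy as np

# Multiplicative weights for a zero-sum matrix game min_x max_y x^T A y over
# probability simplices; the averaged strategies form an approximate Nash
# equilibrium whose quality is certified by the duality gap.
rng = np.random.default_rng(0)
m, n, T = 20, 30, 20000
A = rng.uniform(-1, 1, size=(m, n))
eta = np.sqrt(np.log(max(m, n)) / T)

x = np.full(m, 1.0 / m)
y = np.full(n, 1.0 / n)
x_avg, y_avg = np.zeros(m), np.zeros(n)
for _ in range(T):
    x_avg += x
    y_avg += y
    gx, gy = A @ y, A.T @ x      # each player's current payoff vector
    x = x * np.exp(-eta * gx)    # minimizer downweights costly rows
    x /= x.sum()
    y = y * np.exp(eta * gy)     # maximizer upweights valuable columns
    y /= y.sum()
x_avg /= T
y_avg /= T
gap = (A.T @ x_avg).max() - (A @ y_avg).min()  # duality gap bounds the error
print(gap)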

Relative Lipschitzness in extragradient methods and a direct recipe for acceleration

MB Cohen, A Sidford, K Tian - arXiv preprint arXiv:2011.06572, 2020 - arxiv.org
We show that standard extragradient methods (i.e., mirror prox and dual extrapolation)
recover optimal accelerated rates for first-order minimization of smooth convex functions. To …
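
A minimal Euclidean instance of the extragradient (mirror prox) update the result refers to, applied to a toy bilinear saddle point; the problem, step size, and iteration count are illustrative assumptions rather than the paper's setting:

import numpy as np

# Extragradient for the monotone operator of min_x max_y x^T A y,
# i.e. F(x, y) = (A y, -A^T x); the unique saddle point is the origin.
rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d))
eta = 0.5 / np.linalg.norm(A, 2)   # conservative step for this Lipschitz operator

def F(z):
    x, y = z[:d], z[d:]
    return np.concatenate([A @ y, -A.T @ x])

z = rng.standard_normal(2 * d)
for _ in range(20000):
    z_half = z - eta * F(z)        # extrapolation: evaluate at the current point
    z = z - eta * F(z_half)        # update using the operator at the midpoint
print(np.linalg.norm(F(z)))        # operator residual shrinks toward zero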

Lower complexity bounds of finite-sum optimization problems: The results and construction

Y Han, G Xie, Z Zhang - Journal of Machine Learning Research, 2024 - jmlr.org
In this paper we study the lower complexity bounds for finite-sum optimization problems,
where the objective is the average of $n$ individual component functions. We consider a …

Distributionally robust optimization via ball oracle acceleration

Y Carmon, D Hausler - Advances in Neural Information …, 2022 - proceedings.neurips.cc
We develop and analyze algorithms for distributionally robust optimization (DRO) of convex
losses. In particular, we consider group-structured and bounded $f$-divergence uncertainty …

Linear-sized sparsifiers via near-linear time discrepancy theory

A Jambulapati, V Reis, K Tian - Proceedings of the 2024 Annual ACM-SIAM …, 2024 - SIAM
Discrepancy theory has provided powerful tools for producing higher-quality objects which
“beat the union bound” in fundamental settings throughout combinatorics and computer …