Stochastic variance reduction for variational inequality methods
We propose stochastic variance reduced algorithms for solving convex-concave saddle
point problems, monotone variational inequalities, and monotone inclusions. Our framework …
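The snippet is cut off before the methods; below is a minimal sketch of one standard variance-reduced extragradient template for this setting, assuming the operator is a finite average of components (the paper's actual schemes, stepsizes, and sampling rules may differ; `F_list`, `eta`, `epochs`, and `inner_steps` are illustrative names).

```python
import numpy as np

def svrg_extragradient(F_list, z0, eta, epochs, inner_steps):
    """Sketch of a variance-reduced extragradient loop for a monotone
    variational inequality with operator F(z) = (1/n) * sum_i F_i(z).
    F_list is a (hypothetical) list of callables F_i: array -> array."""
    n = len(F_list)
    z = np.array(z0, dtype=float)
    for _ in range(epochs):
        w = z.copy()                              # snapshot point
        Fw = sum(Fi(w) for Fi in F_list) / n      # full operator at snapshot
        for _ in range(inner_steps):
            i = np.random.randint(n)
            g = F_list[i](z) - F_list[i](w) + Fw  # unbiased estimate of F(z)
            z_half = z - eta * g                  # extrapolation step
            g_half = F_list[i](z_half) - F_list[i](w) + Fw
            z = z - eta * g_half                  # update with lookahead estimate
    return z
```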
Efficiently solving MDPs with stochastic mirror descent
We present a unified framework based on primal-dual stochastic mirror descent for
approximately solving infinite-horizon Markov decision processes (MDPs) given a …
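A core primitive behind such primal-dual methods is mirror descent with the entropy mirror map over the probability simplex; a minimal sketch of one step follows (the MDP-specific gradient estimators and the primal-dual coupling from the paper are not modeled here).

```python
import numpy as np

def entropic_mirror_descent_step(x, g, eta):
    """One stochastic mirror descent step over the probability simplex
    with the entropy mirror map (a multiplicative update), the basic
    primitive behind primal-dual methods for MDPs.  x must have
    strictly positive entries; g is a stochastic gradient estimate."""
    logits = np.log(x) - eta * g
    w = np.exp(logits - logits.max())   # shift for numerical stability
    return w / w.sum()
```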
An improved quantum-inspired algorithm for linear regression
We give a classical algorithm for linear regression analogous to the quantum matrix
inversion algorithm [Harrow, Hassidim, and Lloyd, Physical Review Letters '09] for low-rank …
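For orientation, the dense-linear-algebra baseline that quantum-inspired regression emulates is least squares restricted to the top-k singular subspace; the paper's contribution is approximating this under sampling access in time sublinear in the matrix dimensions, which this sketch does not attempt.

```python
import numpy as np

def truncated_svd_regression(A, b, k):
    """Least squares restricted to the top-k singular subspace:
    returns x = A_k^+ b, where A_k is the best rank-k approximation
    of A.  Dense baseline only; not the sampling-based algorithm."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])
```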
Sharper rates for separable minimax and finite sum optimization via primal-dual extragradient methods
We design accelerated algorithms with improved rates for several fundamental classes of
optimization problems. Our algorithms all build upon techniques related to the analysis of …
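The techniques build on primal-dual extragradient steps; a minimal deterministic sketch for min_x max_y f(x, y) is below, assuming `grad_x` and `grad_y` are callables returning the partial gradients (the paper's separable structure and sharper stepsize analysis are not modeled).

```python
import numpy as np

def extragradient(grad_x, grad_y, x0, y0, eta, steps):
    """Plain extragradient for min_x max_y f(x, y): take a half step
    using current gradients, then update using gradients evaluated at
    the half point."""
    x, y = np.array(x0, dtype=float), np.array(y0, dtype=float)
    for _ in range(steps):
        xh = x - eta * grad_x(x, y)      # extrapolation (lookahead)
        yh = y + eta * grad_y(x, y)
        x = x - eta * grad_x(xh, yh)     # update with lookahead gradients
        y = y + eta * grad_y(xh, yh)
    return x, y
```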
Smooth monotone stochastic variational inequalities and saddle point problems: A survey
This paper is a survey of methods for solving smooth, (strongly) monotone stochastic
variational inequalities. To begin with, we present the deterministic foundation from which …
Quantum speedups for zero-sum games via improved dynamic Gibbs sampling
We give a quantum algorithm for computing an $\epsilon$-approximate Nash equilibrium of
a zero-sum game given by an $m \times n$ payoff matrix with bounded entries. Given a standard …
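The classical framework being sped up is multiplicative-weights self-play, whose iterates are Gibbs (softmax) distributions over accumulated payoffs; a minimal classical sketch follows (the paper's dynamic Gibbs sampler accelerates the exact exponentiation done below).

```python
import numpy as np

def mwu_zero_sum(A, T, eta):
    """Multiplicative-weights self-play for the matrix game
    min_x max_y x^T A y.  Average iterates approximate a Nash
    equilibrium for a suitable stepsize eta."""
    m, n = A.shape
    sx, sy = np.zeros(m), np.zeros(n)             # accumulated payoffs
    x_avg, y_avg = np.zeros(m), np.zeros(n)
    for _ in range(T):
        lx = -eta * sx
        x = np.exp(lx - lx.max()); x /= x.sum()   # row player (minimizer)
        ly = eta * sy
        y = np.exp(ly - ly.max()); y /= y.sum()   # column player (maximizer)
        sx += A @ y                               # row player's incurred losses
        sy += A.T @ x                             # column player's gains
        x_avg += x / T
        y_avg += y / T
    return x_avg, y_avg
```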
Relative Lipschitzness in extragradient methods and a direct recipe for acceleration
We show that standard extragradient methods (i.e., mirror prox and dual extrapolation)
recover optimal accelerated rates for first-order minimization of smooth convex functions. To …
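For orientation, the condition in the title is a Bregman-divergence bound on the operator; a hedged paraphrase of its usual form is below (the paper should be consulted for the exact statement): an operator $g$ is $\lambda$-relatively Lipschitz with respect to a convex regularizer $r$ with Bregman divergence $V$ if

```latex
% Hedged paraphrase of relative Lipschitzness; V_z(w) denotes the
% Bregman divergence of the regularizer r from z to w.
\[
  \langle g(w) - g(z),\, w - u \rangle \le \lambda\,\bigl(V_z(w) + V_w(u)\bigr)
  \qquad \text{for all } u,\, w,\, z .
\]
```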
Lower complexity bounds of finite-sum optimization problems: The results and construction
In this paper we study the lower complexity bounds for finite-sum optimization problems,
where the objective is the average of $n$ individual component functions. We consider a …
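For context, the finite-sum objective in question is shown below, together with the well-known shape of the lower bound in the $L$-smooth, $\mu$-strongly convex regime (with $\kappa = L/\mu$); the paper's precise statements and constructions go beyond this.

```latex
\[
  \min_{x \in \mathbb{R}^d} \; f(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x),
  \qquad
  \Omega\!\Bigl( n + \sqrt{n\kappa}\,\log\tfrac{1}{\epsilon} \Bigr)
  \ \text{incremental first-order oracle queries.}
\]
```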
Distributionally robust optimization via ball oracle acceleration
Y Carmon, D Hausler - Advances in Neural Information …, 2022 - proceedings.neurips.cc
We develop and analyze algorithms for distributionally robust optimization (DRO) of convex
losses. In particular, we consider group-structured and bounded $f$-divergence uncertainty …
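As one concrete instance of a group-structured uncertainty set: when the adversary may place arbitrary mixture weights on the groups, the DRO objective reduces to the worst group-average loss. A minimal sketch of evaluating that objective is below (`losses`, `group_ids`, and `n_groups` are hypothetical inputs; the paper's ball-oracle acceleration for optimizing it is not shown).

```python
import numpy as np

def group_dro_objective(losses, group_ids, n_groups):
    """Worst group-average loss: the group-structured DRO objective
    when mixture weights over groups are unconstrained on the simplex.
    Assumes per-example arrays and that every group is nonempty."""
    group_means = np.array([losses[group_ids == g].mean()
                            for g in range(n_groups)])
    return group_means.max()
```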
Linear-sized sparsifiers via near-linear time discrepancy theory
Discrepancy theory has provided powerful tools for producing higher-quality objects which
“beat the union bound” in fundamental settings throughout combinatorics and computer …
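For contrast with the discrepancy-based approach in the title, the classical randomized baseline is Spielman-Srivastava-style sampling by effective resistance; a dense-algebra sketch of it follows (this is explicitly not the paper's method, which builds linear-sized sparsifiers deterministically via discrepancy theory, nor the near-linear-time variant).

```python
import numpy as np

def effective_resistance_sparsifier(edges, weights, n, q, rng=None):
    """Randomized spectral sparsification baseline: sample q edges with
    probability proportional to weight * effective resistance, then
    reweight samples so the sparsifier's Laplacian is unbiased."""
    if rng is None:
        rng = np.random.default_rng()
    m = len(edges)
    B = np.zeros((m, n))                          # signed incidence matrix
    for k, (u, v) in enumerate(edges):
        B[k, u], B[k, v] = 1.0, -1.0
    L = B.T @ (weights[:, None] * B)              # weighted graph Laplacian
    Lpinv = np.linalg.pinv(L)
    reff = np.einsum('ij,jk,ik->i', B, Lpinv, B)  # effective resistances
    p = weights * reff
    p /= p.sum()
    new_w = np.zeros(m)
    for k in rng.choice(m, size=q, p=p):
        new_w[k] += weights[k] / (q * p[k])       # importance-weighted copy
    keep = new_w > 0
    return [e for e, kept in zip(edges, keep) if kept], new_w[keep]
```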