A modern introduction to online learning

F Orabona - arXiv preprint arXiv:1912.13213, 2019 - arxiv.org
In this monograph, I introduce the basic concepts of Online Learning through a modern view
of Online Convex Optimization. Here, online learning refers to the framework of regret …
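
As a concrete illustration of the regret framework (a sketch, not taken from the monograph): online gradient descent on a stream of one-dimensional quadratic losses keeps its cumulative loss close to that of the best fixed point in hindsight. The loss sequence, horizon, and 1/√t stepsize schedule below are illustrative choices.

```python
import math

def ogd(zs, x0=0.0, D=1.0):
    """Online gradient descent on the loss sequence f_t(x) = (x - z_t)^2,
    projected onto [-D, D], with stepsize eta_t = D / sqrt(t)."""
    x, total = x0, 0.0
    for t, z in enumerate(zs, start=1):
        total += (x - z) ** 2                       # suffer loss f_t(x_t)
        x -= (D / math.sqrt(t)) * 2.0 * (x - z)     # gradient step on f_t
        x = max(-D, min(D, x))                      # project back onto [-D, D]
    return total

zs = [0.9, -0.2, 0.5, 0.1, 0.7, -0.4, 0.3, 0.6]
# best fixed comparator in hindsight: the mean of the z_t
u = sum(zs) / len(zs)
regret = ogd(zs) - sum((u - z) ** 2 for z in zs)
```

For convex losses this gap grows on the order of √T, while both cumulative losses can grow linearly in T.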

On last-iterate convergence beyond zero-sum games

I Anagnostides, I Panageas, G Farina… - International …, 2022 - proceedings.mlr.press
Most existing results about last-iterate convergence of learning dynamics are limited to two-
player zero-sum games, and only apply under rigid assumptions about what dynamics the …

An improved cutting plane method for convex optimization, convex-concave games, and its applications

H Jiang, YT Lee, Z Song, SC Wong - … of the 52nd Annual ACM SIGACT …, 2020 - dl.acm.org
Given a separation oracle for a convex set K ⊂ ℝⁿ that is contained in a box of radius R, the
goal is to either compute a point in K or prove that K does not contain a ball of radius ε. We …
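
A minimal sketch of the cutting-plane template (one-dimensional here for clarity; the paper's method works in ℝⁿ with far stronger guarantees): each oracle query either certifies membership or halves the region that could still contain K. The set K and tolerance below are illustrative.

```python
def cutting_plane(oracle, R, eps):
    """1-D cutting-plane sketch.  `oracle(x)` returns 0 if x is in K,
    -1 if K lies strictly to the right of x, +1 if strictly to the left.
    Returns ('point', x) with x in K, or ('empty', None) certifying that
    K contains no interval of radius eps."""
    lo, hi = -R, R                      # K is always contained in [lo, hi]
    while hi - lo >= 2 * eps:
        mid = (lo + hi) / 2.0
        side = oracle(mid)
        if side == 0:
            return ('point', mid)
        if side < 0:
            lo = mid                    # K is to the right: discard the left half
        else:
            hi = mid                    # K is to the left: discard the right half
    return ('empty', None)

# illustrative instance: K = [0.3, 0.35] inside the box [-1, 1]
def oracle(x):
    if x < 0.3:
        return -1
    if x > 0.35:
        return 1
    return 0

status, x = cutting_plane(oracle, R=1.0, eps=1e-4)
```

Each query shrinks the search region by half, so the loop makes O(log(R/ε)) oracle calls.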

[PDF] Convex program duality, Fisher markets, and Nash social welfare

R Cole, N Devanur, V Gkatzelis, K Jain, T Mai… - Proceedings of the …, 2017 - dl.acm.org
The main focus of this paper is on the problem of maximizing the Nash social welfare (NSW).
In particular, given a collection of indivisible goods that needs to be allocated to a set of …
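
The objective itself is easy to state in code. The brute-force sketch below (illustrative only; the paper is about tractable approximation, not enumeration) maximizes the product of additive utilities over all assignments of indivisible goods.

```python
from itertools import product

def max_nsw(valuations):
    """Brute-force max Nash social welfare for additive valuations:
    valuations[i][j] = agent i's value for good j.  Enumerates every
    assignment of goods to agents and maximizes the product of utilities."""
    n, m = len(valuations), len(valuations[0])
    best, best_alloc = -1.0, None
    for assign in product(range(n), repeat=m):   # good j goes to agent assign[j]
        utils = [0.0] * n
        for j, i in enumerate(assign):
            utils[i] += valuations[i][j]
        nsw = 1.0
        for v in utils:
            nsw *= v                             # geometric-mean objective, unnormalized
        if nsw > best:
            best, best_alloc = nsw, assign
    return best, best_alloc

# illustrative instance: two agents, three goods
vals = [[2.0, 1.0, 0.0], [0.0, 1.0, 2.0]]
nsw, alloc = max_nsw(vals)
```

The enumeration is exponential in the number of goods, which is exactly why the NSW-maximization problem calls for the convex-programming machinery the paper develops.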

Adaptive gradient descent without descent

Y Malitsky, K Mishchenko - arXiv preprint arXiv:1910.09529, 2019 - arxiv.org
We present a strikingly simple proof that two rules are sufficient to automate gradient
descent: 1) don't increase the stepsize too fast and 2) don't overstep the local curvature. No …
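
A sketch of the two rules, assuming the stepsize update λ_k = min{√(1+θ_{k−1})·λ_{k−1}, ‖x_k−x_{k−1}‖ / (2‖∇f(x_k)−∇f(x_{k−1})‖)} with θ_k = λ_k/λ_{k−1}; the test function, initial stepsize, and iteration count below are illustrative choices, and the fallback when gradients coincide is a simplification.

```python
import math

def adgd(grad, x0, lam0=1e-6, steps=200):
    """Adaptive gradient descent: rule 1 limits how fast the stepsize may
    grow; rule 2 keeps it below half the inverse local curvature, as
    estimated from two successive gradients."""
    x_prev = list(x0)
    g_prev = grad(x_prev)
    lam, theta = lam0, float('inf')
    x = [xi - lam * gi for xi, gi in zip(x_prev, g_prev)]
    for _ in range(steps):
        g = grad(x)
        dx = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, x_prev)))
        dg = math.sqrt(sum((a - b) ** 2 for a, b in zip(g, g_prev)))
        if dg > 0:
            # rule 1: bounded growth; rule 2: respect the local curvature
            lam_new = min(math.sqrt(1.0 + theta) * lam, dx / (2.0 * dg))
        else:
            lam_new = 2.0 * lam          # simplification for flat regions
        theta = lam_new / lam
        lam = lam_new
        x_prev, g_prev = x, g
        x = [xi - lam * gi for xi, gi in zip(x, g)]
    return x

# illustrative test function: f(x, y) = 0.5 * (x**2 + 10 * y**2), minimum at 0
grad = lambda v: [v[0], 10.0 * v[1]]
x = adgd(grad, [5.0, 3.0])
```

Note that no Lipschitz constant or line search appears anywhere: the curvature estimate is recomputed from the last two iterates at every step.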

Accelerated Bregman proximal gradient methods for relatively smooth convex optimization

F Hanzely, P Richtarik, L Xiao - Computational Optimization and …, 2021 - Springer
We consider the problem of minimizing the sum of two convex functions: one is differentiable
and relatively smooth with respect to a reference convex function, and the other can be …

Mirror descent with relative smoothness in measure spaces, with application to Sinkhorn and EM

PC Aubin-Frankowski, A Korba… - Advances in Neural …, 2022 - proceedings.neurips.cc
Many problems in machine learning can be formulated as optimizing a convex functional
over a vector space of measures. This paper studies the convergence of the mirror descent …
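
In the finite-dimensional special case, mirror descent with the entropy mirror map reduces to the exponentiated-gradient update on the probability simplex; the linear objective and stepsize below are illustrative choices, not from the paper.

```python
import math

def mirror_descent_simplex(grad, x0, eta=0.1, steps=300):
    """Mirror descent with the entropy mirror map on the probability simplex:
    the update is multiplicative, x_{t+1,i} proportional to x_{t,i} * exp(-eta * g_i),
    so iterates stay in the simplex without an explicit projection."""
    x = list(x0)
    for _ in range(steps):
        g = grad(x)
        w = [xi * math.exp(-eta * gi) for xi, gi in zip(x, g)]
        z = sum(w)
        x = [wi / z for wi in w]        # renormalize back onto the simplex
    return x

# minimize f(x) = <c, x> over the simplex; the optimum puts all mass on argmin c
c = [0.5, 0.2, 0.9]
x = mirror_descent_simplex(lambda x: c, [1 / 3, 1 / 3, 1 / 3])
```

The measure-space setting of the paper replaces this finite simplex with a space of probability measures and the entropy with a suitable Bregman functional.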

Stochastic mirror descent: Convergence analysis and adaptive variants via the mirror stochastic Polyak stepsize

R D'Orazio, N Loizou, I Laradji, I Mitliagkas - arXiv preprint arXiv …, 2021 - arxiv.org
We investigate the convergence of stochastic mirror descent (SMD) under interpolation in
relatively smooth and smooth convex optimization. In relatively smooth convex optimization …

Asynchronous proportional response dynamics: convergence in markets with adversarial scheduling

Y Kolumbus, M Levy, N Nisan - Advances in Neural …, 2024 - proceedings.neurips.cc
We study Proportional Response Dynamics (PRD) in linear Fisher markets, where
participants act asynchronously. We model this scenario as a sequential process in which at …
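
The synchronous baseline of these dynamics (the paper's subject is the asynchronous variant) is easy to sketch: each buyer re-splits her budget across goods in proportion to the utility obtained from each good in the previous round. The market instance below is illustrative.

```python
def prd(u, B, rounds=100):
    """Synchronous proportional response in a linear Fisher market.
    u[i][j] = buyer i's value per unit of good j, B[i] = buyer i's budget.
    Bids determine prices, prices determine allocations, and each buyer's
    next bids are proportional to the utility each good delivered."""
    n, m = len(u), len(u[0])
    b = [[B[i] / m for _ in range(m)] for i in range(n)]   # uniform initial bids
    for _ in range(rounds):
        p = [sum(b[i][j] for i in range(n)) for j in range(m)]            # prices
        x = [[b[i][j] / p[j] if p[j] > 0 else 0.0 for j in range(m)]
             for i in range(n)]                                           # allocations
        for i in range(n):
            util = sum(u[i][j] * x[i][j] for j in range(m))
            b[i] = [B[i] * u[i][j] * x[i][j] / util for j in range(m)]
    return [sum(b[i][j] for i in range(n)) for j in range(m)]             # final prices

# illustrative instance: two buyers with unit budgets, two goods
u = [[2.0, 1.0], [1.0, 2.0]]
p = prd(u, [1.0, 1.0])
```

Since every buyer always spends her full budget, the prices sum to the total budget in every round; in this symmetric instance they converge to the market-equilibrium prices (1, 1).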

Meta-learning in games

K Harris, I Anagnostides, G Farina, M Khodak… - arXiv preprint arXiv …, 2022 - arxiv.org
In the literature on game-theoretic equilibrium finding, focus has mainly been on solving a
single game in isolation. In practice, however, strategic interactions--ranging from routing …