A modern introduction to online learning
F. Orabona - arXiv preprint arXiv:1912.13213, 2019 - arxiv.org
In this monograph, I introduce the basic concepts of Online Learning through a modern view
of Online Convex Optimization. Here, online learning refers to the framework of regret …
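The regret framework mentioned in this abstract can be illustrated with a minimal sketch (my own illustrative example, not taken from the monograph): online gradient descent on a stream of convex losses, with regret measured against the best fixed point in hindsight.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
targets = rng.uniform(-1.0, 1.0, size=T)   # the environment's choices z_t

def loss(x, z):       # convex per-round loss f_t(x) = (x - z_t)^2
    return (x - z) ** 2

def grad(x, z):
    return 2.0 * (x - z)

# Online (projected) gradient descent on [-1, 1] with eta_t ~ 1/sqrt(t)
x = 0.0
cum_loss = 0.0
for t, z in enumerate(targets, start=1):
    cum_loss += loss(x, z)                 # suffer the loss, then observe it
    x -= grad(x, z) / (2 * np.sqrt(t))
    x = np.clip(x, -1.0, 1.0)

# Regret against the best fixed comparator in hindsight (grid approximation)
grid = np.linspace(-1.0, 1.0, 2001)
best_fixed = min((loss(u, targets).sum() for u in grid))
regret = cum_loss - best_fixed
```

With a 1/√t stepsize the regret grows sublinearly, so the average regret per round vanishes as T grows.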
On last-iterate convergence beyond zero-sum games
Most existing results about last-iterate convergence of learning dynamics are limited to two-
player zero-sum games, and only apply under rigid assumptions about what dynamics the …
An improved cutting plane method for convex optimization, convex-concave games, and its applications
Given a separation oracle for a convex set K ⊂ ℝⁿ that is contained in a box of radius R, the
goal is to either compute a point in K or prove that K does not contain a ball of radius ε. We …
[PDF][PDF] Convex program duality, Fisher markets, and Nash social welfare
The main focus of this paper is on the problem of maximizing the Nash social welfare (NSW).
In particular, given a collection of indivisible goods that needs to be allocated to a set of …
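The objective in this abstract, the Nash social welfare of an allocation of indivisible goods, can be sketched by brute force on a tiny instance (an illustrative toy, assuming additive valuations; not the algorithm of the paper):

```python
import itertools
import math

def max_nsw(values):
    """Brute-force NSW maximization: values[i][j] is agent i's value for
    good j; NSW of an allocation is the geometric mean of agent utilities."""
    n, m = len(values), len(values[0])
    best, best_alloc = 0.0, None
    for assign in itertools.product(range(n), repeat=m):  # good j -> agent
        utils = [sum(values[i][j] for j in range(m) if assign[j] == i)
                 for i in range(n)]
        nsw = math.prod(utils) ** (1.0 / n)
        if nsw > best:
            best, best_alloc = nsw, assign
    return best, best_alloc
```

On `values = [[1, 2], [2, 1]]` this gives each agent its preferred good, with NSW 2; the papers in this area replace the exponential search with convex-programming relaxations.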
Adaptive gradient descent without descent
We present a strikingly simple proof that two rules are sufficient to automate gradient
descent: 1) don't increase the stepsize too fast and 2) don't overstep the local curvature. No …
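The two rules in this abstract translate into a short stepsize recursion; the following is a hedged sketch of that adaptive scheme (my reading of the rules, on a generic smooth problem, not the authors' reference code):

```python
import numpy as np

def adgd(grad, x0, iters=1000, lam0=1e-6):
    """Adaptive gradient descent sketch: (1) grow the stepsize by at most a
    sqrt(1 + theta) factor, (2) cap it with a local inverse-curvature
    estimate from successive gradients. No line search, no tuning of L."""
    x_prev, g_prev = x0, grad(x0)
    x = x0 - lam0 * g_prev
    lam_prev, theta = lam0, np.inf
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < 1e-12:
            break
        dg = np.linalg.norm(g - g_prev)
        # rule 2: don't overstep the local curvature
        local = np.linalg.norm(x - x_prev) / (2 * dg) if dg > 0 else np.inf
        # rule 1: don't increase the stepsize too fast
        lam = min(np.sqrt(1 + theta) * lam_prev, local)
        x_prev, g_prev = x, g
        x = x - lam * g
        theta, lam_prev = lam / lam_prev, lam
    return x
```

On a poorly scaled quadratic the stepsize grows from the tiny initial value to the locally appropriate scale on its own.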
Accelerated Bregman proximal gradient methods for relatively smooth convex optimization
We consider the problem of minimizing the sum of two convex functions: one is differentiable
and relatively smooth with respect to a reference convex function, and the other can be …
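For reference, the Bregman proximal gradient step that such accelerated methods build on can be written in its standard form (a textbook statement, not copied from the paper):

$$x_{k+1} = \operatorname*{argmin}_{x} \left\{ \langle \nabla f(x_k), x \rangle + g(x) + \tfrac{1}{\lambda} D_h(x, x_k) \right\},$$

where $D_h(x,y) = h(x) - h(y) - \langle \nabla h(y), x - y \rangle$ is the Bregman divergence of the reference function $h$, and relative smoothness means $f(x) \le f(y) + \langle \nabla f(y), x - y \rangle + L\, D_h(x, y)$, which guarantees descent for $\lambda \le 1/L$.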
Mirror descent with relative smoothness in measure spaces, with application to sinkhorn and em
Many problems in machine learning can be formulated as optimizing a convex functional
over a vector space of measures. This paper studies the convergence of the mirror descent …
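One of the schemes this abstract reinterprets as mirror descent is Sinkhorn's algorithm for entropic optimal transport; a self-contained sketch of the classical iteration (standard matrix-scaling form, not the paper's measure-space formulation):

```python
import numpy as np

def sinkhorn(C, r, c, eps=0.1, iters=500):
    """Alternately rescale K = exp(-C/eps) so the transport plan matches
    the row marginals r and the column marginals c."""
    K = np.exp(-C / eps)
    u = np.ones_like(r)
    for _ in range(iters):
        v = c / (K.T @ u)            # match column marginals
        u = r / (K @ v)              # match row marginals
    return u[:, None] * K * v[None, :]   # transport plan
```

Each half-step is an exact Bregman projection onto one marginal constraint, which is the mirror-descent reading of the algorithm.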
Stochastic mirror descent: Convergence analysis and adaptive variants via the mirror stochastic polyak stepsize
We investigate the convergence of stochastic mirror descent (SMD) under interpolation in
relatively smooth and smooth convex optimization. In relatively smooth convex optimization …
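The Polyak stepsize underlying the mirror/stochastic variants in this abstract is easy to state in its basic deterministic form; a sketch assuming the optimal value f* is known (the Euclidean special case, not the paper's mirror version):

```python
import numpy as np

def polyak_gd(f, grad, x0, f_star=0.0, iters=200):
    """Gradient descent with the Polyak stepsize
    eta_k = (f(x_k) - f*) / ||grad f(x_k)||^2."""
    x = x0.copy()
    for _ in range(iters):
        g = grad(x)
        gn2 = g @ g
        if gn2 == 0:                      # stationary point reached
            break
        x = x - (f(x) - f_star) / gn2 * g
    return x
```

The stepsize automatically shrinks as the suboptimality gap closes, with no Lipschitz constant to tune.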
Asynchronous proportional response dynamics: convergence in markets with adversarial scheduling
We study Proportional Response Dynamics (PRD) in linear Fisher markets, where
participants act asynchronously. We model this scenario as a sequential process in which at …
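The synchronous version of the dynamics this abstract studies has a compact update rule; a sketch for a linear Fisher market (the standard synchronous PRD, not the paper's adversarially scheduled variant):

```python
import numpy as np

def prd(a, budgets, iters=200):
    """Proportional Response Dynamics: each buyer splits its budget across
    goods in proportion to the utility each good contributed last round.
    a[i, j] is buyer i's value per unit of good j."""
    n, m = a.shape
    b = np.outer(budgets, np.ones(m)) / m        # start with uniform bids
    for _ in range(iters):
        p = b.sum(axis=0)                        # prices: total bids per good
        x = b / p                                # allocation x_ij = b_ij / p_j
        util = (a * x).sum(axis=1)               # u_i = sum_j a_ij x_ij
        b = budgets[:, None] * (a * x) / util[:, None]   # proportional response
    return b.sum(axis=0), b
```

On a symmetric two-buyer, two-good market the bids concentrate on each buyer's preferred good while prices converge to the market-equilibrium prices.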
Meta-learning in games
In the literature on game-theoretic equilibrium finding, focus has mainly been on solving a
single game in isolation. In practice, however, strategic interactions--ranging from routing …