Accelerated first-order optimization algorithms for machine learning
Numerical optimization serves as one of the pillars of machine learning. To meet the
demands of big data applications, much effort has been devoted to designing theoretically …
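For context, a minimal NumPy sketch of the canonical accelerated first-order method, Nesterov's accelerated gradient for an L-smooth convex objective. This is textbook background rather than any specific method from the survey:

```python
import numpy as np

def nesterov_agd(grad, x0, L, n_iter=500):
    """Nesterov's accelerated gradient method for an L-smooth convex f.

    Generic textbook sketch; the survey covers many further variants
    (stochastic, proximal, adaptive, ...).
    """
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        x_next = y - grad(y) / L                        # gradient step from extrapolated point
        t_next = (1 + np.sqrt(1 + 4 * t**2)) / 2
        y = x_next + (t - 1) / t_next * (x_next - x)    # momentum extrapolation
        x, t = x_next, t_next
    return x

# Example: least squares 0.5*||Ax - b||^2, with L the largest eigenvalue of A^T A.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 20)), rng.standard_normal(50)
L = np.linalg.norm(A, 2) ** 2
x = nesterov_agd(lambda z: A.T @ (A @ z - b), np.zeros(20), L)
```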
First-order methods for convex optimization
First-order methods for solving convex optimization problems have been at the forefront of
mathematical optimization in the last 20 years. The rapid development of this important class …
Scaled, inexact, and adaptive generalized FISTA for strongly convex optimization
We consider a variable metric and inexact version of the fast iterative soft-thresholding
(FISTA)-type algorithm considered in [L. Calatroni and A. Chambolle, SIAM J …
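As a reference point, a sketch of classical constant-metric FISTA applied to the lasso; the paper's scaled, inexact, and adaptive variant generalizes this baseline:

```python
import numpy as np

def fista_lasso(A, b, lam, n_iter=500):
    """Classical FISTA for min 0.5*||Ax - b||^2 + lam*||x||_1.

    Constant-metric baseline only; the paper studies a scaled
    (variable-metric), inexact, and adaptive generalization.
    """
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        g = y - A.T @ (A @ y - b) / L    # forward (gradient) step
        x_next = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-thresholding
        t_next = (1 + np.sqrt(1 + 4 * t**2)) / 2
        y = x_next + (t - 1) / t_next * (x_next - x)
        x, t = x_next, t_next
    return x
```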
Acceleration and restart for the randomized Bregman-Kaczmarz method
Optimizing strongly convex functions subject to linear constraints is a fundamental problem
with numerous applications. In this work, we propose a block (accelerated) randomized …
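For background, a sketch of the basic randomized Kaczmarz iteration with Strohmer-Vershynin row sampling for a consistent linear system; the paper works in the more general Bregman setting with blocks, acceleration, and restarts:

```python
import numpy as np

def randomized_kaczmarz(A, b, n_iter=2000, seed=0):
    """Basic randomized Kaczmarz for a consistent system Ax = b.

    Rows are sampled with probability proportional to ||a_i||^2.
    Baseline only: the paper adds the Bregman geometry, block
    updates, acceleration, and restart schedules.
    """
    rng = np.random.default_rng(seed)
    row_norms2 = np.einsum('ij,ij->i', A, A)
    probs = row_norms2 / row_norms2.sum()
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        i = rng.choice(A.shape[0], p=probs)
        # Project the iterate onto the hyperplane <a_i, x> = b_i.
        x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]
    return x
```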
Quadratic error bound of the smoothed gap and the restarted averaged primal-dual hybrid gradient
O. Fercoq, arXiv preprint arXiv:2206.03041, 2022
We study the linear convergence of the primal-dual hybrid gradient method. After a review of
current analyses, we show that they do not properly explain the behavior of the algorithm …
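A sketch of the plain (non-restarted, non-averaged) PDHG iteration the analysis concerns, written with generic proximal operators; `prox_g` and `prox_fstar` are placeholders the caller supplies:

```python
import numpy as np

def pdhg(K, prox_g, prox_fstar, x0, y0, n_iter=1000):
    """Vanilla primal-dual hybrid gradient (Chambolle-Pock) iteration
    for min_x g(x) + f(Kx), with user-supplied prox operators.

    Plain sketch; the paper analyzes a restarted, averaged variant.
    """
    sigma = tau = 0.9 / np.linalg.norm(K, 2)   # step sizes with sigma*tau*||K||^2 < 1
    x, y, x_bar = x0.copy(), y0.copy(), x0.copy()
    for _ in range(n_iter):
        y = prox_fstar(y + sigma * (K @ x_bar), sigma)  # dual ascent step
        x_new = prox_g(x - tau * (K.T @ y), tau)        # primal descent step
        x_bar = 2 * x_new - x                           # extrapolation
        x = x_new
    return x, y

# Example: min_x 0.5*||x - z||^2 + lam*||K x||_1.
rng = np.random.default_rng(0)
K, z, lam = rng.standard_normal((30, 20)), rng.standard_normal(20), 0.1
prox_g = lambda v, tau: (v + tau * z) / (1 + tau)    # prox of 0.5*||. - z||^2
prox_fstar = lambda v, sigma: np.clip(v, -lam, lam)  # projection onto ||.||_inf <= lam
x, y = pdhg(K, prox_g, prox_fstar, np.zeros(20), np.zeros(30))
```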
Parallel random block-coordinate forward–backward algorithm: a unified convergence analysis
We study the block-coordinate forward–backward algorithm in which the blocks are updated
in a random and possibly parallel manner, according to arbitrary probabilities. The algorithm …
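A serial sketch of one random block-coordinate forward-backward update on the lasso, with uniform block sampling; the paper's unified analysis additionally covers parallel updates and arbitrary probabilities:

```python
import numpy as np

def block_fb_lasso(A, b, lam, n_blocks=4, n_iter=5000, seed=0):
    """Random block-coordinate forward-backward for the lasso,
    updating one uniformly chosen coordinate block per iteration.

    Serial sketch only; parallel updates and non-uniform sampling
    are handled by the paper's framework.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    blocks = np.array_split(np.arange(n), n_blocks)
    # Per-block step sizes from block-wise Lipschitz constants.
    steps = [1.0 / np.linalg.norm(A[:, blk], 2) ** 2 for blk in blocks]
    x = np.zeros(n)
    residual = -b                        # A @ x - b, maintained incrementally
    for _ in range(n_iter):
        i = rng.integers(n_blocks)
        blk, gamma = blocks[i], steps[i]
        g = x[blk] - gamma * (A[:, blk].T @ residual)   # forward step on block i
        x_new = np.sign(g) * np.maximum(np.abs(g) - gamma * lam, 0.0)  # backward (prox) step
        residual += A[:, blk] @ (x_new - x[blk])
        x[blk] = x_new
    return x
```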
Rest-Katyusha: exploiting the solution's structure via scheduled restart schemes
We propose a structure-adaptive variant of the state-of-the-art stochastic variance-reduced
gradient algorithm Katyusha for regularized empirical risk minimization. The proposed …
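For orientation, a sketch of plain SVRG, the variance-reduced base scheme that Katyusha accelerates with negative momentum; Rest-Katyusha's scheduled, structure-adaptive restarts are not reproduced here:

```python
import numpy as np

def svrg(grad_i, n, x0, step, n_epochs=20, m=None, seed=0):
    """Plain SVRG for min (1/n) * sum_i f_i(x).

    grad_i(i, x) returns the gradient of the i-th component at x.
    Base scheme only: Katyusha adds negative momentum, and
    Rest-Katyusha wraps the method in scheduled restarts.
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()
    m = m or 2 * n                                   # inner-loop length
    for _ in range(n_epochs):
        snapshot = x.copy()
        # Full gradient at the snapshot point.
        full_grad = sum(grad_i(i, snapshot) for i in range(n)) / n
        for _ in range(m):
            i = rng.integers(n)
            g = grad_i(i, x) - grad_i(i, snapshot) + full_grad  # variance-reduced estimator
            x -= step * g
    return x
```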
Hybrid heavy-ball systems: Reset methods for optimization with uncertainty
Momentum methods for convex optimization often rely on precise choices of algorithmic
parameters, based on knowledge of problem parameters, in order to achieve fast …
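A toy discrete-time illustration of the reset idea: heavy-ball iterations whose momentum is zeroed whenever it opposes the descent direction. The paper itself develops resets in a hybrid-systems framework with robustness guarantees, so this is only a loose analogue:

```python
import numpy as np

def heavy_ball_reset(grad, x0, alpha, beta, n_iter=500):
    """Heavy-ball iteration with a simple momentum-reset rule.

    Toy analogue of a reset method: the velocity is zeroed whenever
    it points against the negative gradient, rather than relying on
    finely tuned momentum parameters.
    """
    x, v = x0.copy(), np.zeros_like(x0)
    for _ in range(n_iter):
        g = grad(x)
        if v @ g > 0:                 # momentum opposes descent: reset it
            v = np.zeros_like(v)
        v = beta * v - alpha * g      # heavy-ball velocity update
        x = x + v
    return x
```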
Coordinate descent methods beyond smoothness and separability
This paper deals with convex nonsmooth optimization problems. We introduce a general
smooth approximation framework for the original function and apply random (accelerated) …
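One concrete instance of the smooth-then-apply-coordinate-descent strategy: random coordinate descent on a Huber-smoothed lasso. The Huber choice is an assumption for illustration; the paper's approximation framework is far more general and includes accelerated variants:

```python
import numpy as np

def huber_grad(t, mu):
    """Gradient of the Huber smoothing of |t| with parameter mu."""
    return np.clip(t / mu, -1.0, 1.0)

def rcd_smoothed_lasso(A, b, lam, mu=1e-2, n_iter=20000, seed=0):
    """Random coordinate descent on the smoothed objective
    0.5*||Ax - b||^2 + lam*Huber_mu(x).

    Illustrative instance only; the paper treats a general smooth
    approximation framework beyond separability assumptions.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    # Coordinate-wise Lipschitz constants of the smoothed objective.
    L = np.einsum('ij,ij->j', A, A) + lam / mu
    x = np.zeros(n)
    residual = -b                                  # A @ x - b, kept up to date
    for _ in range(n_iter):
        j = rng.integers(n)
        g = A[:, j] @ residual + lam * huber_grad(x[j], mu)
        step = g / L[j]
        residual -= step * A[:, j]
        x[j] -= step
    return x
```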
A Guide to Stochastic Optimisation for Large-Scale Inverse Problems
Stochastic optimisation algorithms are the de facto standard for machine learning with large
amounts of data. Handling only a subset of available data in each optimisation step …
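As a baseline for the guide's subject matter, a sketch of minibatch SGD on least squares, where each step touches only a small random subset of the data:

```python
import numpy as np

def minibatch_sgd(A, b, step, batch_size=8, n_iter=2000, seed=0):
    """Minibatch SGD for least squares 0.5/n * ||Ax - b||^2.

    Baseline stochastic scheme: each step uses only `batch_size`
    of the n rows, the starting point before variance reduction,
    acceleration, and refined sampling strategies.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(n_iter):
        idx = rng.choice(n, size=batch_size, replace=False)
        Ab, bb = A[idx], b[idx]
        g = Ab.T @ (Ab @ x - bb) / batch_size      # unbiased gradient estimate
        x -= step * g
    return x
```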