Accelerated first-order optimization algorithms for machine learning

H Li, C Fang, Z Lin - Proceedings of the IEEE, 2020 - ieeexplore.ieee.org
Numerical optimization serves as one of the pillars of machine learning. To meet the
demands of big data applications, many efforts have been devoted to designing theoretically …
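
The "acceleration" surveyed here refers to Nesterov-type momentum, which improves the worst-case rate of gradient descent from O(1/k) to O(1/k^2) on smooth convex problems. Below is a minimal sketch of Nesterov's accelerated gradient method on a hypothetical least-squares objective; the function name and the toy data A, b are illustrative, not from the survey.

```python
import numpy as np

def nesterov_agd(grad, L, x0, iters=100):
    """Nesterov's accelerated gradient method for an L-smooth convex f.

    grad : callable returning the gradient of f
    L    : Lipschitz constant of grad (step size is 1/L)
    """
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_next = y - grad(y) / L                         # gradient step at extrapolated point
        t_next = (1 + np.sqrt(1 + 4 * t**2)) / 2         # standard momentum schedule
        y = x_next + ((t - 1) / t_next) * (x_next - x)   # momentum extrapolation
        x, t = x_next, t_next
    return x

# Toy usage: minimize 0.5 * ||A x - b||^2 (hypothetical data).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
grad = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A.T @ A, 2)   # largest eigenvalue of A^T A
x_star = nesterov_agd(grad, L, np.zeros(2), iters=200)
```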

First-order methods for convex optimization

P Dvurechensky, S Shtern, M Staudigl - EURO Journal on Computational Optimization, 2021 - Elsevier
First-order methods for solving convex optimization problems have been at the forefront of
mathematical optimization in the last 20 years. The rapid development of this important class …
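
A workhorse from this class is the projected gradient method for smooth convex problems over a simple constraint set. A minimal sketch, assuming a box constraint so the projection is a componentwise clip; the problem data are hypothetical:

```python
import numpy as np

def projected_gradient(grad, project, x0, step, iters=100):
    """Projected gradient descent for min f(x) over a convex set C.

    project : Euclidean projection onto C
    step    : constant step size, e.g. 1/L for an L-smooth f
    """
    x = x0.copy()
    for _ in range(iters):
        x = project(x - step * grad(x))   # gradient step, then project back onto C
    return x

# Toy usage: least squares over the box [0, 1]^n (hypothetical data).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
grad = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A, 2) ** 2
x = projected_gradient(grad, lambda z: np.clip(z, 0.0, 1.0),
                       np.zeros(5), step=1.0 / L, iters=500)
```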

Scaled, inexact, and adaptive generalized FISTA for strongly convex optimization

S Rebegoldi, L Calatroni - SIAM Journal on Optimization, 2022 - SIAM
We consider a variable metric and inexact version of the fast iterative soft-thresholding
algorithm (FISTA) considered in [L. Calatroni and A. Chambolle, SIAM J …
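
For reference, the plain (exact, unscaled) FISTA iteration that the paper generalizes can be sketched as follows for the lasso problem min_x 0.5*||Ax - b||^2 + lam*||x||_1; the variable metric and inexact proximal evaluations studied in the paper are omitted:

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def fista(A, b, lam, iters=300):
    """Plain FISTA for min 0.5*||A x - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the smooth part
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(iters):
        # Forward (gradient) step at y, then backward (proximal) step.
        x_next = soft_threshold(y - A.T @ (A @ y - b) / L, lam / L)
        t_next = (1 + np.sqrt(1 + 4 * t**2)) / 2
        y = x_next + ((t - 1) / t_next) * (x_next - x)   # momentum extrapolation
        x, t = x_next, t_next
    return x
```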

Acceleration and restart for the randomized Bregman-Kaczmarz method

L Tondji, I Necoara, DA Lorenz - Linear Algebra and its Applications, 2024 - Elsevier
Optimizing strongly convex functions subject to linear constraints is a fundamental problem
with numerous applications. In this work, we propose a block (accelerated) randomized …
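
The Bregman-Kaczmarz method generalizes the classical randomized Kaczmarz iteration for linear systems. A sketch of that classical baseline with Strohmer-Vershynin row sampling; the block, accelerated, and Bregman features of the paper are not shown:

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=1000, seed=0):
    """Classical randomized Kaczmarz for a consistent system A x = b.

    Rows are sampled with probability proportional to their squared norm;
    each step projects the iterate onto one hyperplane {x : <a_i, x> = b_i}.
    """
    rng = np.random.default_rng(seed)
    row_norms = np.sum(A**2, axis=1)
    probs = row_norms / row_norms.sum()
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        i = rng.choice(len(b), p=probs)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]   # projection onto row i's hyperplane
    return x
```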

Quadratic error bound of the smoothed gap and the restarted averaged primal-dual hybrid gradient

O Fercoq - arXiv preprint arXiv:2206.03041, 2022 - arxiv.org
We study the linear convergence of the primal-dual hybrid gradient method. After a review of
current analyses, we show that they do not properly explain the behavior of the algorithm …
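
As background, the basic (non-restarted, non-averaged) PDHG iteration for min_x f(x) + g(Kx) alternates a dual proximal step, a primal proximal step, and an extrapolation. A minimal sketch for the special case f = 0 and g(z) = ||z - b||_1, where both proximal maps are closed-form; the instance is illustrative only:

```python
import numpy as np

def pdhg_l1(K, b, iters=500):
    """Basic PDHG (Chambolle-Pock) for min_x ||K x - b||_1.

    Saddle-point form: min_x max_{||y||_inf <= 1} <K x - b, y>.
    Step sizes satisfy tau * sigma * ||K||^2 <= 1.
    """
    tau = sigma = 1.0 / np.linalg.norm(K, 2)
    x = x_bar = np.zeros(K.shape[1])
    y = np.zeros(K.shape[0])
    for _ in range(iters):
        y = np.clip(y + sigma * (K @ x_bar - b), -1.0, 1.0)  # prox of sigma * g*
        x_next = x - tau * (K.T @ y)                          # prox of f = 0 is the identity
        x_bar = 2 * x_next - x                                # extrapolation
        x = x_next
    return x
```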

Parallel random block-coordinate forward–backward algorithm: a unified convergence analysis

S Salzo, S Villa - Mathematical Programming, 2022 - Springer
We study the block-coordinate forward–backward algorithm in which the blocks are updated
in a random and possibly parallel manner, according to arbitrary probabilities. The algorithm …
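
In its simplest serial form with uniform sampling, one iteration takes a partial gradient (forward) step on one block followed by a proximal (backward) step on that block. A sketch for an l1-regularized least-squares objective, assuming uniform block probabilities rather than the arbitrary, possibly parallel sampling the paper allows:

```python
import numpy as np

def block_cd_forward_backward(A, b, lam, blocks, iters=2000, seed=0):
    """Random block-coordinate forward-backward for
    min 0.5*||A x - b||^2 + lam*||x||_1, updating one block per iteration.

    blocks : list of index arrays partitioning the coordinates
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    # Per-block Lipschitz constants of the partial gradients.
    L = [np.linalg.norm(A[:, blk], 2) ** 2 for blk in blocks]
    for _ in range(iters):
        i = rng.integers(len(blocks))          # uniform block sampling
        blk = blocks[i]
        g = A[:, blk].T @ (A @ x - b)          # partial gradient (forward step)
        z = x[blk] - g / L[i]
        x[blk] = np.sign(z) * np.maximum(np.abs(z) - lam / L[i], 0.0)  # prox (backward step)
    return x

# Toy usage with three blocks of two coordinates (hypothetical data).
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 6))
b = rng.standard_normal(30)
x = block_cd_forward_backward(A, b, lam=0.1, blocks=np.array_split(np.arange(6), 3))
```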

Rest-Katyusha: exploiting the solution's structure via scheduled restart schemes

J Tang, M Golbabaee, F Bach - Advances in Neural Information Processing Systems, 2018 - proceedings.neurips.cc
We propose a structure-adaptive variant of the state-of-the-art stochastic variance-reduced
gradient algorithm Katyusha for regularized empirical risk minimization. The proposed …
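
Katyusha accelerates SVRG-style variance reduction, and Rest-Katyusha restarts it on a schedule to exploit the solution's structure. The sketch below shows only the underlying SVRG loop with its variance-reduced gradient estimator; the momentum and restart machinery of the paper is omitted:

```python
import numpy as np

def svrg(grad_i, n, x0, step, epochs=30, seed=0):
    """Basic SVRG for min (1/n) * sum_i f_i(x); Katyusha adds momentum on top.

    grad_i : callable (i, x) -> gradient of the i-th component f_i
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(epochs):
        snapshot = x.copy()
        full_grad = sum(grad_i(i, snapshot) for i in range(n)) / n  # anchor gradient
        for _ in range(n):                    # inner loop of n stochastic steps
            i = rng.integers(n)
            # Variance-reduced estimator: unbiased, with vanishing variance at the optimum.
            v = grad_i(i, x) - grad_i(i, snapshot) + full_grad
            x -= step * v
    return x
```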

Hybrid heavy-ball systems: Reset methods for optimization with uncertainty

JH Le, AR Teel - 2021 American Control Conference (ACC), 2021 - ieeexplore.ieee.org
Momentum methods for convex optimization often rely on precise choices of algorithmic
parameters, based on knowledge of problem parameters, in order to achieve fast …
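
A discrete-time caricature of such reset policies is heavy-ball momentum whose velocity is zeroed whenever it stops being a descent direction, in the spirit of adaptive restart. This is a hedged sketch of that idea, not the hybrid-systems construction of the paper:

```python
import numpy as np

def heavy_ball_with_reset(grad, x0, step, beta=0.9, iters=500):
    """Heavy-ball method with a simple momentum reset.

    The velocity is reset whenever it no longer points in a descent
    direction (a crude discrete analogue of a reset rule).
    """
    x, v = x0.copy(), np.zeros_like(x0)
    for _ in range(iters):
        g = grad(x)
        v = beta * v - step * g
        if np.dot(v, g) > 0:        # momentum opposes descent: reset it
            v = -step * g
        x = x + v
    return x
```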

Coordinate descent methods beyond smoothness and separability

F Chorobura, I Necoara - Computational Optimization and Applications, 2024 - Springer
This paper deals with convex nonsmooth optimization problems. We introduce a general
smooth approximation framework for the original function and apply random (accelerated) …
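
One concrete instantiation of such a framework is Huber-type smoothing of an l1 data-fit term followed by random coordinate descent on the smoothed surrogate. The sketch below uses that specific smoothing as an assumption for illustration:

```python
import numpy as np

def smoothed_rcd(A, b, mu=0.1, iters=5000, seed=0):
    """Random coordinate descent on a Huber-smoothed version of ||A x - b||_1.

    Each |r| is replaced by the mu-smooth Huber function, whose derivative
    is clip(r / mu, -1, 1); coordinate j's partial derivative then has
    Lipschitz constant ||A[:, j]||^2 / mu.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    r = A @ x - b                              # maintained residual
    L = np.sum(A**2, axis=0) / mu              # coordinatewise Lipschitz constants
    for _ in range(iters):
        j = rng.integers(A.shape[1])
        g_j = A[:, j] @ np.clip(r / mu, -1.0, 1.0)   # smoothed partial derivative
        d = -g_j / L[j]
        x[j] += d
        r += d * A[:, j]                       # cheap residual update
    return x
```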

A Guide to Stochastic Optimisation for Large-Scale Inverse Problems

MJ Ehrhardt, Z Kereta, J Liang, J Tang - arXiv preprint arXiv:2406.06342, 2024 - arxiv.org
Stochastic optimisation algorithms are the de facto standard for machine learning with large
amounts of data. Handling only a subset of available data in each optimisation step …
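
The prototypical such algorithm is minibatch stochastic gradient descent, which touches only a random subset of rows per step. A minimal sketch for a least-squares inverse problem with hypothetical data and a crude heuristic step size:

```python
import numpy as np

def minibatch_sgd(A, b, batch=8, step=None, epochs=20, seed=0):
    """Minibatch SGD for min (1/2m)*||A x - b||^2 using random row subsets."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    if step is None:
        step = batch / np.linalg.norm(A, 2) ** 2   # crude heuristic step size
    x = np.zeros(n)
    for _ in range(epochs):
        # One pass over the data in shuffled minibatches.
        for idx in np.array_split(rng.permutation(m), max(1, m // batch)):
            Ai, bi = A[idx], b[idx]
            x -= step * Ai.T @ (Ai @ x - bi) / len(idx)   # stochastic gradient step
    return x

# Toy usage (hypothetical data).
rng = np.random.default_rng(2)
A = rng.standard_normal((64, 10))
b = rng.standard_normal(64)
x = minibatch_sgd(A, b)
```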