FedNL: Making Newton-type methods applicable to federated learning

M Safaryan, R Islamov, X Qian, P Richtárik - arXiv preprint arXiv …, 2021 - arxiv.org

Inspired by recent work of Islamov et al. (2021), we propose a family of Federated Newton
Learn (FedNL) methods, which we believe is a marked step in the direction of making …
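
The snippet names the mechanism only in passing, so here is a minimal single-machine sketch of a compressed Hessian-learning Newton step in the spirit of FedNL; the Top-K matrix compressor, the regularization constant, and the function signatures are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def topk_matrix(M, k):
    """Keep the k largest-magnitude entries of M, zero the rest (illustrative compressor; ties may keep a few more)."""
    flat = np.abs(M).ravel()
    if k >= flat.size:
        return M.copy()
    thresh = np.partition(flat, -k)[-k]
    return np.where(np.abs(M) >= thresh, M, 0.0)

def fednl_style_step(x, grads, hessians, H_locals, alpha=1.0, k=5, mu=1e-3):
    """One hedged Newton-type round: each 'client' sends a compressed Hessian
    correction, the server averages the learned Hessian estimates and takes a Newton step."""
    n = len(grads)
    for i in range(n):
        # client i compresses the gap between its true Hessian and its current estimate
        H_locals[i] = H_locals[i] + alpha * topk_matrix(hessians[i] - H_locals[i], k)
    H = sum(H_locals) / n
    g = sum(grads) / n
    # regularize so the averaged estimate is safely invertible (sketch assumption)
    H_reg = H + mu * np.eye(H.shape[0])
    return x - np.linalg.solve(H_reg, g), H_locals
```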

Matrix compression via randomized low rank and low precision factorization

R Saha, V Srivastava, M Pilanci - Advances in Neural …, 2023 - proceedings.neurips.cc
Matrices are exceptionally useful in various fields of study as they provide a convenient
framework to organize and manipulate data in a structured manner. However, modern …
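
As a rough illustration of the two ingredients in the title, the sketch below pairs a standard randomized range-finder (Halko-style) with naive uniform scalar quantization of the factors; both choices are assumptions for illustration, not necessarily the algorithm proposed in the paper.

```python
import numpy as np

def randomized_low_rank(A, rank, oversample=5, rng=None):
    """Randomized range finder plus a small SVD (standard technique, shown for context)."""
    rng = np.random.default_rng() if rng is None else rng
    Omega = rng.standard_normal((A.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(A @ Omega)          # orthonormal basis for an approximate range of A
    U, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return Q @ U[:, :rank], s[:rank], Vt[:rank]

def quantize_uniform(M, bits=4):
    """Uniform scalar quantization of matrix entries to the given bit width (illustrative)."""
    levels = 2 ** bits - 1
    lo, hi = M.min(), M.max()
    scale = (hi - lo) / levels if hi > lo else 1.0
    return np.round((M - lo) / scale) * scale + lo

# Usage: store low-rank factors at low precision, then reassemble an approximation.
A = np.random.default_rng(0).standard_normal((200, 100))
U, s, Vt = randomized_low_rank(A, rank=10)
A_hat = quantize_uniform(U * s) @ quantize_uniform(Vt)
```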

Basis matters: better communication-efficient second order methods for federated learning

X Qian, R Islamov, M Safaryan, P Richtárik - arXiv preprint arXiv …, 2021 - arxiv.org
Recent advances in distributed optimization have shown that Newton-type methods with
proper communication compression mechanisms can guarantee fast local rates and low …

Distributed optimization methods for multi-robot systems: Part 2—A survey

O Shorinwa, T Halsted, J Yu… - IEEE Robotics & …, 2024 - ieeexplore.ieee.org
Although the field of distributed optimization is well developed, relevant literature focused on
the application of distributed optimization to multi-robot problems is limited. This survey …

SHED: A Newton-type algorithm for federated learning based on incremental Hessian eigenvector sharing

N Dal Fabbro, S Dey, M Rossi, L Schenato - Automatica, 2024 - Elsevier
There is a growing interest in the distributed optimization framework that goes under the
name of Federated Learning (FL). In particular, much attention is being turned to FL …

Distributed adaptive greedy quasi-Newton methods with explicit non-asymptotic convergence bounds

Y Du, K You - Automatica, 2024 - Elsevier
Though quasi-Newton methods have been extensively studied in the literature, they either
suffer from local convergence or use a series of line searches for global convergence which …

Distributed Newton-type methods with communication compression and Bernoulli aggregation

R Islamov, X Qian, S Hanzely, M Safaryan… - … on Machine Learning …, 2023 - openreview.net
Despite their high computation and communication costs, Newton-type methods remain an
appealing option for distributed training due to their robustness against ill-conditioned …
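
A small hedged sketch of what the title's Bernoulli aggregation could look like on the server side: each client's (possibly compressed) update is included with probability p and the average is rescaled to stay unbiased. The rescaling rule and parameter names are assumptions, not the paper's formulation.

```python
import numpy as np

def bernoulli_aggregate(updates, p=0.5, rng=None):
    """Include each client's update with probability p, then rescale by 1/p so the
    result is an unbiased estimate of the plain average (illustrative rule)."""
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(len(updates)) < p
    picked = [u for u, m in zip(updates, mask) if m]
    if not picked:
        return np.zeros_like(updates[0])
    return sum(picked) / (p * len(updates))
```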

Distributed principal component analysis with limited communication

F Alimisis, P Davies… - Advances in Neural …, 2021 - proceedings.neurips.cc
We study efficient distributed algorithms for the fundamental problem of principal component
analysis and leading eigenvector computation on the sphere, when the data are randomly …
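
For context, the following is plain distributed power iteration for the leading eigenvector, where one averaging step per round stands in for communication; it ignores the limited-communication compression that is the paper's actual contribution.

```python
import numpy as np

def distributed_power_iteration(cov_shards, iters=50, seed=0):
    """Each node multiplies the current vector by its local covariance; the results are
    averaged (one 'communication' per round) and renormalized, converging to the
    leading eigenvector of the average covariance."""
    d = cov_shards[0].shape[0]
    v = np.random.default_rng(seed).standard_normal(d)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = np.mean([C @ v for C in cov_shards], axis=0)
        v /= np.linalg.norm(v)
    return v
```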

Large deviations for products of non-identically distributed network matrices with applications to communication-efficient distributed learning and inference

N Petrović, D Bajović, S Kar… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
This paper studies products of independent but non-identically distributed random network
matrices that arise as weight matrices in distributed consensus-type computation and …
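
Such weight-matrix products arise, for instance, in randomized gossip, where each round applies a random averaging matrix to the nodes' values; the sketch below is a generic illustration of that setting, not the paper's analysis.

```python
import numpy as np

def randomized_gossip(x, rounds=500, seed=0):
    """Consensus via products of random network matrices: each round applies
    W_k = I - 0.5 * (e_i - e_j)(e_i - e_j)^T for a random pair (i, j), i.e. it
    averages the values held by nodes i and j."""
    rng = np.random.default_rng(seed)
    x = x.astype(float).copy()
    for _ in range(rounds):
        i, j = rng.choice(x.size, size=2, replace=False)
        x[i] = x[j] = 0.5 * (x[i] + x[j])
    return x  # values concentrate around the initial average
```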

Variance reduced distributed non-convex optimization using matrix stepsizes

H Li, A Karagulyan, P Richtárik - 2024 - repository.kaust.edu.sa
Matrix-stepsized gradient descent algorithms have been shown to have superior
performance in non-convex optimization problems compared to their scalar counterparts …
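
A minimal sketch of the matrix-stepsize idea, i.e. a gradient step x <- x - D @ grad(x) with a fixed stepsize matrix D instead of a scalar; the diagonal choice of D in the example is purely illustrative.

```python
import numpy as np

def matrix_stepsize_gd(grad, x0, D, iters=100):
    """Gradient descent with a fixed matrix stepsize D (illustrative sketch)."""
    x = x0.copy()
    for _ in range(iters):
        x = x - D @ grad(x)
    return x

# Example: quadratic f(x) = 0.5 x^T A x with D chosen as a (hypothetical) diagonal preconditioner.
A = np.diag([100.0, 1.0])
D = np.diag(1.0 / np.diag(A))   # illustrative diagonal stepsize
x_min = matrix_stepsize_gd(lambda x: A @ x, np.array([1.0, 1.0]), D)
```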