FedNL: Making Newton-type methods applicable to federated learning
Inspired by recent work of Islamov et al. (2021), we propose a family of Federated Newton
Learn (FedNL) methods, which we believe is a marked step in the direction of making …
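A minimal sketch of a FedNL-style round, under strong simplifying assumptions (quadratic local losses, a Top-K compressor applied to Hessian differences, a plain regularized Newton step); this is not the exact FedNL specification, only an illustration of the server learning local Hessians from compressed corrections.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, d = 4, 5

def random_spd(d):
    M = rng.standard_normal((d, d))
    return np.eye(d) + 0.1 * M @ M.T

# Synthetic local quadratics f_i(x) = 0.5 x^T A_i x - b_i^T x (assumption for the demo).
A = [random_spd(d) for _ in range(n_clients)]
b = [rng.standard_normal(d) for _ in range(n_clients)]

def top_k(M, k):
    """Keep the k largest-magnitude entries of M, zero the rest (contractive compressor)."""
    out = np.zeros(M.size)
    idx = np.argpartition(np.abs(M).ravel(), -k)[-k:]
    out[idx] = M.ravel()[idx]
    return out.reshape(M.shape)

x = np.zeros(d)
H = [np.eye(d) for _ in range(n_clients)]   # server-side estimates of the local Hessians

for t in range(50):
    g_bar = sum(A[i] @ x - b[i] for i in range(n_clients)) / n_clients  # averaged gradients
    for i in range(n_clients):
        # Client i sends only a compressed correction toward its true Hessian A[i].
        H[i] = H[i] + top_k(A[i] - H[i], k=d)
    H_bar = sum(H) / n_clients
    x = x - np.linalg.solve(H_bar + 1e-6 * np.eye(d), g_bar)            # Newton-type step

grad_norm = np.linalg.norm(sum(A[i] @ x - b[i] for i in range(n_clients)) / n_clients)
print("final gradient norm:", grad_norm)
```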
Matrix compression via randomized low rank and low precision factorization
Matrices are exceptionally useful in various fields of study as they provide a convenient
framework to organize and manipulate data in a structured manner. However, modern …
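As an illustration of the kind of compression named in the title, the sketch below uses a standard randomized range finder (Halko-Martinsson-Tropp style randomized SVD) and stores the factors in float16; the paper's actual factorization and quantization scheme may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

def randomized_low_rank(A, rank, oversample=5):
    """Return factors U, V with A ~= U @ V, using a random sketch of the range of A."""
    m, n = A.shape
    Omega = rng.standard_normal((n, rank + oversample))   # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                        # orthonormal basis for the range
    B = Q.T @ A                                           # small projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = (Q @ Ub[:, :rank]) * s[:rank]                     # absorb singular values into U
    V = Vt[:rank]
    return U.astype(np.float16), V.astype(np.float16)     # low-precision storage of factors

A = rng.standard_normal((200, 50)) @ rng.standard_normal((50, 300))   # rank-50 test matrix
U, V = randomized_low_rank(A, rank=50)
err = np.linalg.norm(A - U.astype(np.float64) @ V.astype(np.float64)) / np.linalg.norm(A)
print(f"relative error: {err:.3e}, stored floats: {U.size + V.size} vs {A.size}")
```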
Basis matters: better communication-efficient second order methods for federated learning
Recent advances in distributed optimization have shown that Newton-type methods with
proper communication compression mechanisms can guarantee fast local rates and low …
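A hedged toy example of why the choice of basis matters for compression: sparsifying a Hessian in a basis adapted to the problem (here, the eigenbasis of a reference matrix) retains far more accuracy than sparsifying raw entries. The paper's actual basis construction and compressors are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 20
B_ref = rng.standard_normal((d, d)); B_ref = B_ref @ B_ref.T       # reference matrix
H = B_ref + 0.01 * rng.standard_normal((d, d)); H = (H + H.T) / 2  # local Hessian near it

Q = np.linalg.eigh(B_ref)[1]   # orthonormal basis adapted to the problem

def top_k(M, k):
    out = np.zeros(M.size)
    idx = np.argpartition(np.abs(M).ravel(), -k)[-k:]
    out[idx] = M.ravel()[idx]
    return out.reshape(M.shape)

k = 2 * d                                        # send only 2d of the d^2 coefficients
raw = top_k(H, k)                                # compress raw entries
adapted = Q @ top_k(Q.T @ H @ Q, k) @ Q.T        # compress coefficients in the adapted basis
print("error, raw basis:    ", np.linalg.norm(H - raw))
print("error, adapted basis:", np.linalg.norm(H - adapted))
```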
Distributed optimization methods for multi-robot systems: Part 2—A survey
Although the field of distributed optimization is well developed, relevant literature focused on
the application of distributed optimization to multi-robot problems is limited. This survey …
SHED: A Newton-type algorithm for federated learning based on incremental Hessian eigenvector sharing
There is a growing interest in the distributed optimization framework that goes under the
name of Federated Learning (FL). In particular, much attention is being turned to FL …
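A rough sketch of the idea named in the title, incremental Hessian eigenvector sharing: a client transmits a few eigenpairs per round and the server's Hessian approximation improves accordingly. The actual SHED algorithm (eigenpair renewal rules, the Newton step, convergence machinery) is more involved than this.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 10
M = rng.standard_normal((d, d))
H_local = M @ M.T + np.eye(d)                 # a client's local Hessian

eigvals, eigvecs = np.linalg.eigh(H_local)
order = np.argsort(eigvals)[::-1]             # largest eigenvalues first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

rho = eigvals[-1]                             # smallest eigenvalue, sent once
H_server = rho * np.eye(d)                    # server-side approximation
per_round = 2                                 # eigenpairs communicated per round

for t in range(0, d, per_round):
    lam = eigvals[t:t + per_round]
    V = eigvecs[:, t:t + per_round]
    H_server = H_server + V @ np.diag(lam - rho) @ V.T   # incremental low-rank update
    err = np.linalg.norm(H_local - H_server) / np.linalg.norm(H_local)
    print(f"after sharing {t + per_round} eigenpairs, relative error = {err:.3f}")
```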
Distributed adaptive greedy quasi-Newton methods with explicit non-asymptotic convergence bounds
Y Du, K You - Automatica, 2024 - Elsevier
Though quasi-Newton methods have been extensively studied in the literature, they either
suffer from local convergence or use a series of line searches for global convergence which …
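Not the paper's method, only a hedged illustration of a greedy SR1-type quasi-Newton update on a fixed quadratic, showing how a Hessian approximation can be tightened step by step without any line search; the distributed, adaptive scheme with explicit non-asymptotic bounds is what the paper contributes.

```python
import numpy as np

rng = np.random.default_rng(8)
d = 8
M = rng.standard_normal((d, d))
A = M @ M.T + np.eye(d)                 # true Hessian of a toy quadratic
G = np.trace(A) * np.eye(d)             # initial overestimate, G >= A

for t in range(3 * d):
    R = G - A                                   # approximation error, stays PSD
    i = int(np.argmax(np.diag(R)))              # greedy coordinate choice
    u = np.zeros(d); u[i] = 1.0
    denom = float(u @ R @ u)
    if denom <= 1e-12:                          # nothing left to correct along coordinates
        break
    G = G - np.outer(R @ u, R @ u) / denom      # SR1-type rank-one correction
    if t % 4 == 0:
        print(f"step {t}: ||G - A||_F = {np.linalg.norm(G - A):.4f}")
```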
Distributed Newton-type methods with communication compression and Bernoulli aggregation
Despite their high computation and communication costs, Newton-type methods remain an
appealing option for distributed training due to their robustness against ill-conditioned …
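A small sketch of Bernoulli aggregation as named in the title: each client transmits its update only with probability p, and the server rescales received updates by 1/p so the aggregate stays unbiased. The paper's exact aggregation and compression rules may differ.

```python
import numpy as np

rng = np.random.default_rng(4)
n_clients, d, p = 10, 3, 0.3

local_updates = [rng.standard_normal(d) for _ in range(n_clients)]

def bernoulli_aggregate(updates, p):
    agg = np.zeros_like(updates[0])
    sent = 0
    for u in updates:
        if rng.random() < p:          # client participates in this round
            agg += u / p              # rescale to keep the estimator unbiased
            sent += 1
    return agg / len(updates), sent

estimate, sent = bernoulli_aggregate(local_updates, p)
exact = np.mean(local_updates, axis=0)
print(f"{sent}/{n_clients} clients transmitted")
print("estimated mean update:", estimate)
print("exact mean update:    ", exact)
```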
Distributed principal component analysis with limited communication
We study efficient distributed algorithms for the fundamental problem of principal component
analysis and leading eigenvector computation on the sphere, when the data are randomly …
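A minimal sketch of distributed leading-eigenvector computation by power iteration, where each node holds a slice of the data and only d-dimensional vectors are exchanged per round; the paper studies more communication-limited variants of this problem with guarantees.

```python
import numpy as np

rng = np.random.default_rng(5)
n_nodes, samples_per_node, d = 5, 200, 8
data = [rng.standard_normal((samples_per_node, d)) @ np.diag(np.linspace(2, 1, d))
        for _ in range(n_nodes)]

v = rng.standard_normal(d); v /= np.linalg.norm(v)
for t in range(100):
    # Each node computes (X_i^T X_i) v locally; only these d-vectors are averaged.
    w = sum(X.T @ (X @ v) for X in data) / sum(X.shape[0] for X in data)
    v = w / np.linalg.norm(w)

cov = sum(X.T @ X for X in data) / sum(X.shape[0] for X in data)
top = np.linalg.eigh(cov)[1][:, -1]
print("alignment with true leading eigenvector:", abs(v @ top))
```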
Large deviations for products of non-identically distributed network matrices with applications to communication-efficient distributed learning and inference
N Petrović, D Bajović, S Kar… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
This paper studies products of independent but non-identically distributed random network
matrices that arise as weight matrices in distributed consensus-type computation and …
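A small illustration of the setting (not of the large-deviation analysis itself): consensus iterates x_{t+1} = W_t x_t with random, time-varying doubly stochastic weight matrices, whose products drive all entries toward the network average. The weight construction below is a standard Metropolis rule chosen only for the demo.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 6
x = rng.standard_normal(n)
target = x.mean()

def random_doubly_stochastic(n, t):
    # Metropolis weights on a random graph whose density varies with t (non-identical rounds).
    A = (rng.random((n, n)) < 0.3 + 0.05 * (t % 5)).astype(float)
    A = np.triu(A, 1); A = A + A.T                     # random undirected graph
    deg = A.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if A[i, j]:
                W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

for t in range(200):
    x = random_doubly_stochastic(n, t) @ x             # one consensus step

print("max deviation from the network average:", np.abs(x - target).max())
```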
Variance reduced distributed non-convex optimization using matrix stepsizes
Matrix-stepsized gradient descent algorithms have been shown to have superior
performance in non-convex optimization problems compared to their scalar counterparts …
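A quick sketch of what a matrix stepsize buys over a scalar one on a badly scaled toy quadratic: the update x <- x - D grad(x) with a diagonal D matched to per-coordinate curvature converges far faster than any safe scalar rate. The paper's distributed, variance-reduced, non-convex setting is much more general than this.

```python
import numpy as np

rng = np.random.default_rng(7)
d = 6
curv = np.linspace(1.0, 100.0, d)        # badly scaled coordinate curvatures

def grad(x):
    return curv * x                      # gradient of 0.5 * sum(curv * x^2)

x_scalar = rng.standard_normal(d)
x_matrix = x_scalar.copy()
eta = 1.0 / curv.max()                   # largest safe scalar stepsize
D = np.diag(1.0 / curv)                  # matrix stepsize matched to curvature

for t in range(50):
    x_scalar = x_scalar - eta * grad(x_scalar)
    x_matrix = x_matrix - D @ grad(x_matrix)

print("scalar stepsize, final norm:", np.linalg.norm(x_scalar))
print("matrix stepsize, final norm:", np.linalg.norm(x_matrix))
```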