Acceleration for compressed gradient descent in distributed and federated optimization

Z Li, D Kovalev, X Qian, P Richtárik - arXiv preprint arXiv:2002.11364, 2020 - arxiv.org
Due to the high communication cost in distributed and federated learning problems,
methods relying on compression of communicated messages are becoming increasingly …
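The compressors such methods rely on are typically sparsifiers or quantizers applied to gradients before transmission. Below is a minimal NumPy sketch of one standard choice, unbiased random-k sparsification; it illustrates only the compression primitive, not this paper's accelerated algorithm.

```python
import numpy as np

def rand_k(x: np.ndarray, k: int, rng: np.random.Generator) -> np.ndarray:
    """Unbiased random-k sparsification: keep k random coordinates,
    scaled by d/k so that E[C(x)] = x."""
    d = x.size
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(x)
    out[idx] = x[idx] * (d / k)
    return out

# Each worker would compress its gradient before communicating it:
rng = np.random.default_rng(0)
g = rng.standard_normal(10_000)
g_hat = rand_k(g, k=100, rng=rng)  # ~100x fewer nonzeros to transmit
```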

SoteriaFL: A unified framework for private federated learning with communication compression

Z Li, H Zhao, B Li, Y Chi - Advances in Neural Information …, 2022 - proceedings.neurips.cc
To enable large-scale machine learning in bandwidth-hungry environments such as
wireless networks, significant progress has been made recently in designing communication …
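As an illustration of the two ingredients the title names, the sketch below composes Gaussian perturbation (local-DP style) with top-k sparsification in a single client update. It is a generic composition for intuition only, not SoteriaFL's actual scheme; the noise scale `sigma` and sparsity `k` are placeholder parameters.

```python
import numpy as np

def private_compressed_grad(g: np.ndarray, sigma: float, k: int,
                            rng: np.random.Generator) -> np.ndarray:
    """Generic sketch (not SoteriaFL's exact mechanism): perturb a local
    gradient with Gaussian noise for privacy, then sparsify to top-k
    entries to cut communication."""
    noisy = g + rng.normal(0.0, sigma, size=g.shape)  # DP-style noise
    idx = np.argsort(np.abs(noisy))[-k:]              # top-k compression
    out = np.zeros_like(noisy)
    out[idx] = noisy[idx]
    return out
```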

Elastic aggregation for federated optimization

D Chen, J Hu, VJ Tan, X Wei… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Federated learning enables the privacy-preserving training of neural network models using
real-world data across distributed clients. FedAvg has become the preferred optimizer for …
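For reference, here is a minimal sketch of the standard FedAvg server step that elastic aggregation would replace: a dataset-size-weighted average of client models (parameters flattened into arrays for simplicity).

```python
import numpy as np

def fedavg_aggregate(client_params: list[np.ndarray],
                     client_sizes: list[int]) -> np.ndarray:
    """Standard FedAvg aggregation: average client models weighted by
    local dataset size."""
    total = sum(client_sizes)
    return sum((n / total) * w for n, w in zip(client_sizes, client_params))
```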

Recent theoretical advances in non-convex optimization

M Danilova, P Dvurechensky, A Gasnikov… - … and Probability: With a …, 2022 - Springer
Motivated by the recently increased interest in algorithms for non-convex optimization, in application to training deep neural networks and other optimization problems …

EF21 with bells & whistles: Practical algorithmic extensions of modern error feedback

I Fatkhullin, I Sokolov, E Gorbunov, Z Li… - arXiv preprint arXiv …, 2021 - arxiv.org
First proposed by Seide et al. (2014) as a heuristic, error feedback (EF) is a very popular
mechanism for enforcing convergence of distributed gradient-based optimization methods …
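The core EF21 recursion that these extensions build on can be written in a few lines: each worker keeps a state g_i, communicates only the compressed difference C(grad_i − g_i), and the server steps along the average of the states. The sketch below simulates one round on a single machine with a top-k compressor; it is an illustrative reading of the mechanism, not the paper's extended variants.

```python
import numpy as np

def top_k(x: np.ndarray, k: int) -> np.ndarray:
    """Top-k: a standard contractive compressor used with error feedback."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def ef21_round(x, g_states, local_grads, gamma, k):
    """One round of the core EF21 recursion: each worker i updates its
    state g_i <- g_i + C(grad_i - g_i) and transmits only the compressed
    difference; the server steps along the average of the states."""
    for i, grad in enumerate(local_grads):
        g_states[i] += top_k(grad - g_states[i], k)
    return x - gamma * np.mean(g_states, axis=0)
```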