Optimal complexity in decentralized training

Y Lu, C De Sa - International conference on machine …, 2021 - proceedings.mlr.press
Decentralization is a promising method of scaling up parallel machine learning systems. In
this paper, we provide a tight lower bound on the iteration complexity for such methods in a …

Achieving acceleration for distributed economic dispatch in smart grids over directed networks

Q Lü, X Liao, H Li, T Huang - IEEE Transactions on Network …, 2020 - ieeexplore.ieee.org
In this paper, the economic dispatch problem (EDP) in smart grids is investigated over a
directed network, which concentrates on allocating the generation power among the …

Decentralized stochastic gradient tracking for non-convex empirical risk minimization

J Zhang, K You - arXiv preprint arXiv:1909.02712, 2019 - arxiv.org
This paper studies a decentralized stochastic gradient tracking (DSGT) algorithm for non-
convex empirical risk minimization problems over a peer-to-peer network of nodes, which is …
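The gradient-tracking methods in this and the surrounding entries share a two-step update: each node mixes its iterate with its neighbors' and maintains a local tracker of the network-average gradient. A minimal NumPy sketch of that generic pattern follows; `grad`, `x0`, and the doubly stochastic mixing matrix `W` are placeholder names, and this is an illustration of the update, not any paper's actual implementation.

```python
import numpy as np

def dsgt(grad, x0, W, alpha=0.05, iters=200):
    """Minimal sketch of decentralized (stochastic) gradient tracking.

    grad(i, x) -> (stochastic) gradient of node i's local objective at x.
    x0 : (n, d) array of initial iterates, one row per node.
    W  : (n, n) doubly stochastic mixing matrix matching the network.
    All names here are illustrative assumptions.
    """
    n = x0.shape[0]
    x = x0.copy()
    g = np.stack([grad(i, x[i]) for i in range(n)])  # local gradients
    y = g.copy()                                     # trackers, initialized to the local gradients
    for _ in range(iters):
        x_new = W @ (x - alpha * y)                  # mix with neighbors, step along the tracker
        g_new = np.stack([grad(i, x_new[i]) for i in range(n)])
        y = W @ y + g_new - g                        # track the network-average gradient
        x, g = x_new, g_new
    return x.mean(axis=0)
```

With a doubly stochastic W and the tracker initialized to the local gradients, the node-average of y equals the average gradient at every iteration, which is the property these methods exploit.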

Optimal gradient tracking for decentralized optimization

Z Song, L Shi, S Pu, M Yan - Mathematical Programming, 2024 - Springer
In this paper, we focus on solving the decentralized optimization problem of minimizing the
sum of n objective functions over a multi-agent network. The agents are embedded in an …

Robust online learning over networks

N Bastianello, D Deplano… - … on Automatic Control, 2024 - ieeexplore.ieee.org
The recent deployment of multi-agent networks has enabled the distributed solution of
learning problems, where agents cooperate to train a global model without sharing their …

A Tutorial on Distributed Optimization for Cooperative Robotics: from Setups and Algorithms to Toolboxes and Research Directions

A Testa, G Carnevale, G Notarstefano - arXiv preprint arXiv:2309.04257, 2023 - arxiv.org
Several interesting problems in multi-robot systems can be cast in the framework of
distributed optimization. Examples include multi-robot task allocation, vehicle routing, target …

Hierarchical federated learning with multi-timescale gradient correction

W Fang, DJ Han, E Chen, S Wang… - arXiv preprint arXiv …, 2024 - arxiv.org
While traditional federated learning (FL) typically focuses on a star topology where clients
are directly connected to a central server, real-world distributed systems often exhibit …

Distributed optimization based on gradient tracking revisited: Enhancing convergence rate via surrogation

Y Sun, G Scutari, A Daneshmand - SIAM Journal on Optimization, 2022 - SIAM
We study distributed multiagent optimization over graphs. We consider the minimization of
F+G subject to convex constraints, where F is the smooth strongly convex sum of the agents' …
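The F+G structure referenced here is the usual composite splitting: a gradient step on the smooth part F combined with a proximal step on the nonsmooth part G. The sketch below shows only that centralized primitive, with G taken to be an l1 penalty purely for concreteness; the paper's distributed surrogation scheme is substantially more elaborate, so treat this as background rather than a summary of the method.

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(grad_F, x0, lam=0.1, alpha=0.01, iters=500):
    """Centralized proximal-gradient sketch for min_x F(x) + lam * ||x||_1.

    grad_F(x) is the gradient of the smooth part F; the nonsmooth part G is
    fixed to an l1 penalty here only for illustration.
    """
    x = x0.copy()
    for _ in range(iters):
        x = prox_l1(x - alpha * grad_F(x), alpha * lam)
    return x
```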

Distributed delayed dual averaging for distributed optimization over time-varying digraphs

D Wang, J Liu, J Lian, Y Liu, Z Wang, W Wang - Automatica, 2023 - Elsevier
In this paper, a push-sum based distributed delayed dual averaging algorithm (PS-DDDA) is
proposed to solve the distributed constrained optimization problem over the time-varying …
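Push-sum is the standard device for consensus-based optimization over directed graphs, where a doubly stochastic mixing matrix may not exist: nodes push both values and scalar weights along out-edges using a column-stochastic matrix, then divide the two to de-bias the averages. The sketch below shows the basic subgradient-push primitive that push-sum dual-averaging schemes such as PS-DDDA build on; `grad`, `A`, and the step-size schedule are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def subgradient_push(grad, x0, A, steps=500, alpha0=1.0):
    """Sketch of the push-sum (subgradient-push) primitive over a digraph.

    grad(i, z) -> (sub)gradient of node i's local function at z.
    x0 : (n, d) initial iterates.
    A  : (n, n) column-stochastic matrix respecting the digraph
         (A[i, j] is the weight node j assigns to the value it pushes to node i).
    """
    n = x0.shape[0]
    x = x0.copy()
    w = np.ones(n)                       # push-sum weights correct the digraph bias
    for k in range(steps):
        x = A @ x                        # push values along out-edges
        w = A @ w                        # push weights the same way
        z = x / w[:, None]               # de-biased estimates of the average
        alpha = alpha0 / np.sqrt(k + 1)  # diminishing step size
        for i in range(n):
            x[i] -= alpha * grad(i, z[i])
    return (x / w[:, None]).mean(axis=0)
```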

Distributed Nesterov gradient and heavy-ball double accelerated asynchronous optimization

H Li, H Cheng, Z Wang, GC Wu - IEEE Transactions on Neural …, 2020 - ieeexplore.ieee.org
In this article, we come up with a novel Nesterov gradient and heavy-ball double accelerated
distributed synchronous optimization algorithm, called NHDA, and adopt a general …
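"Double acceleration" here refers to stacking a Nesterov-style look-ahead and a heavy-ball momentum term on top of a consensus gradient step. The following sketch shows one generic way to combine the two over a doubly stochastic mixing matrix `W`; the parameter names and exact ordering of operations are illustrative and are not the NHDA updates from the paper.

```python
import numpy as np

def double_accelerated_dgd(grad, x0, W, alpha=0.02, beta=0.3, gamma=0.3, iters=300):
    """Sketch: consensus gradient descent with heavy-ball and Nesterov momentum.

    grad(i, x) -> gradient of node i's local objective at x.
    x0 : (n, d) initial iterates; W : (n, n) doubly stochastic mixing matrix.
    beta scales the heavy-ball term, gamma the Nesterov look-ahead.
    """
    n = x0.shape[0]
    x_prev = x0.copy()
    x = x0.copy()
    s = x0.copy()                                        # look-ahead points
    for _ in range(iters):
        g = np.stack([grad(i, s[i]) for i in range(n)])
        x_new = W @ s - alpha * g + beta * (x - x_prev)  # mix + descend + heavy-ball term
        s = x_new + gamma * (x_new - x)                  # Nesterov extrapolation
        x_prev, x = x, x_new
    return x.mean(axis=0)
```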