Distributed Optimization Methods for Multi-Robot Systems: Part I--A Tutorial

O Shorinwa, T Halsted, J Yu, M Schwager - arXiv preprint arXiv …, 2023 - arxiv.org
Distributed optimization provides a framework for deriving distributed algorithms for a variety
of multi-robot problems. This tutorial constitutes the first part of a two-part series on …

An improved analysis of gradient tracking for decentralized machine learning

A Koloskova, T Lin, SU Stich - Advances in Neural …, 2021 - proceedings.neurips.cc
We consider decentralized machine learning over a network where the training data is
distributed across $ n $ agents, each of which can compute stochastic model updates on …
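
The gradient-tracking technique analyzed here (and reused in several entries below) is concise enough to sketch. The following is a minimal numpy illustration on a toy deterministic least-squares problem, assuming a fixed ring topology with Metropolis weights; the toy data, step size, and variable names are illustrative assumptions, and the paper's setting (stochastic updates, more general data) is broader.

import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 3                                   # number of agents, problem dimension
A = rng.normal(size=(n, 10, d))               # illustrative local data for agent i
b = rng.normal(size=(n, 10))

def local_grad(i, x):
    # Gradient of the local loss f_i(x) = 0.5 * ||A_i x - b_i||^2.
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for a ring graph (Metropolis weights).
W = np.zeros((n, n))
for i in range(n):
    for j in ((i - 1) % n, (i + 1) % n):
        W[i, j] = 1.0 / 3.0
    W[i, i] = 1.0 - W[i].sum()

eta = 0.01                                    # step size (illustrative choice)
x = np.zeros((n, d))                          # row i holds agent i's iterate
g = np.array([local_grad(i, x[i]) for i in range(n)])
y = g.copy()                                  # trackers, initialized to the local gradients

for k in range(500):
    x_new = W @ x - eta * y                   # mix with neighbors, step along the tracker
    g_new = np.array([local_grad(i, x_new[i]) for i in range(n)])
    y = W @ y + g_new - g                     # track the network-average gradient
    x, g = x_new, g_new

print("consensus error:", np.linalg.norm(x - x.mean(axis=0)))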

A decentralized proximal-gradient method with network independent step-sizes and separated convergence rates

Z Li, W Shi, M Yan - IEEE Transactions on Signal Processing, 2019 - ieeexplore.ieee.org
This paper proposes a novel proximal-gradient algorithm for a decentralized optimization
problem with a composite objective containing smooth and nonsmooth terms. Specifically …
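
The combine / gradient / prox pattern behind such methods can be sketched in a single step. The function below is a generic illustration for a composite objective with an l1 regularizer; the names and arguments are assumptions for the sketch, and it is not the paper's exact recursion or its network-independent step-size rule.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_step(x, grads, W, alpha, lam):
    # One generic decentralized proximal-gradient round for
    # min_x sum_i f_i(x) + lam * ||x||_1.
    #   x:     (n, d) local iterates, one row per agent
    #   grads: (n, d) local gradients of the smooth terms f_i evaluated at x
    #   W:     (n, n) doubly stochastic mixing matrix
    v = W @ x - alpha * grads                 # consensus step plus local gradient step
    return soft_threshold(v, alpha * lam)     # prox step on the nonsmooth term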

A general framework for decentralized optimization with first-order methods

R Xin, S Pu, A Nedić, UA Khan - Proceedings of the IEEE, 2020 - ieeexplore.ieee.org
Decentralized optimization to minimize a finite sum of functions, distributed over a network of
nodes, has been a significant area within control and signal-processing research due to its …

Exact diffusion for distributed optimization and learning—Part I: Algorithm development

K Yuan, B Ying, X Zhao… - IEEE Transactions on …, 2018 - ieeexplore.ieee.org
This paper develops a distributed optimization strategy with guaranteed exact convergence
for a broad class of left-stochastic combination policies. The resulting exact diffusion strategy …
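
The adapt-correct-combine structure of exact diffusion is short enough to sketch. The version below assumes a symmetric, doubly stochastic combination matrix and an illustrative initialization, whereas the paper covers general left-stochastic policies; function and variable names are assumptions for the sketch.

import numpy as np

def exact_diffusion(grad, W, w0, mu, iters):
    # grad(i, w) returns agent i's local gradient at w; w0 is (n, d).
    n = w0.shape[0]
    W_bar = 0.5 * (np.eye(n) + W)             # averaged combination matrix
    w, psi_prev = w0.copy(), w0.copy()        # with this init the first step is plain diffusion
    for _ in range(iters):
        psi = w - mu * np.array([grad(i, w[i]) for i in range(n)])   # adapt
        phi = psi + w - psi_prev                                      # correct
        w = W_bar @ phi                                               # combine
        psi_prev = psi
    return w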

Networked signal and information processing: Learning by multiagent systems

S Vlaski, S Kar, AH Sayed… - IEEE Signal Processing …, 2023 - ieeexplore.ieee.org
This article reviews significant advances in networked signal and information processing
(SIP), which over the last 25 years have enabled extending decision making and inference …

Distributed heavy-ball: A generalization and acceleration of first-order methods with gradient tracking

R Xin, UA Khan - IEEE Transactions on Automatic Control, 2019 - ieeexplore.ieee.org
We study distributed optimization to minimize a sum of smooth and strongly-convex
functions. Recent work on this problem uses gradient tracking to achieve linear convergence …
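
Loosely, the method adds a heavy-ball momentum term to the gradient-tracking update sketched earlier. The step below is a simplified undirected-graph illustration with assumed names and parameters; the paper's general scheme also handles directed graphs via separate row- and column-stochastic weight matrices, which this sketch omits.

import numpy as np

def heavy_ball_gt_step(x, x_prev, y, g_old, W, grad, eta, beta):
    # One iteration of gradient tracking with a heavy-ball momentum term.
    #   x, x_prev, y, g_old: (n, d) arrays; W: (n, n) doubly stochastic;
    #   grad(i, x_i) returns agent i's local gradient.
    x_new = W @ x - eta * y + beta * (x - x_prev)    # tracked-gradient step plus momentum
    g_new = np.array([grad(i, x_new[i]) for i in range(len(x))])
    y_new = W @ y + g_new - g_old                    # gradient-tracking update
    return x_new, x, y_new, g_new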

Decentralized proximal gradient algorithms with linear convergence rates

SA Alghunaim, EK Ryu, K Yuan… - IEEE Transactions on …, 2020 - ieeexplore.ieee.org
This article studies a class of nonsmooth decentralized multiagent optimization problems
where the agents aim at minimizing a sum of local strongly-convex smooth components plus …

Distributed algorithms for composite optimization: Unified framework and convergence analysis

J Xu, Y Tian, Y Sun, G Scutari - IEEE Transactions on Signal …, 2021 - ieeexplore.ieee.org
We study distributed composite optimization over networks: agents minimize a sum of
smooth (strongly) convex functions–the agents' sum-utility–plus a nonsmooth (extended …
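
The composite problem class behind this entry and the previous one can be written out explicitly; the formulation and the definition of the proximal operator below are standard and only restate what the truncated snippets describe.

\min_{x \in \mathbb{R}^d} \; \sum_{i=1}^{n} f_i(x) + g(x),
\qquad
\operatorname{prox}_{\alpha g}(v) = \arg\min_{x \in \mathbb{R}^d} \Big( g(x) + \tfrac{1}{2\alpha}\,\|x - v\|^2 \Big),

where each f_i is smooth (and strongly convex when linear rates are claimed), g is convex but possibly nonsmooth, and agent i only has access to its own f_i.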

A unification and generalization of exact distributed first-order methods

D Jakovetić - IEEE Transactions on Signal and Information …, 2018 - ieeexplore.ieee.org
Recently, there has been significant progress in the development of distributed first-order
methods. In particular, Shi et al. (2015) on the one hand and Qu and Li (2017) and Nedić et …
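
In aggregated notation, with x^k stacking the local iterates, W the mixing matrix, \tilde{W} = (I + W)/2, and \nabla f(x^k) stacking the local gradients, the two update families the snippet alludes to can be stated as follows; the exact unified parameterization is given in the paper, which recovers both as special cases. EXTRA (Shi et al., 2015):

x^{k+2} = (I + W)\, x^{k+1} - \tilde{W}\, x^{k} - \alpha \big( \nabla f(x^{k+1}) - \nabla f(x^{k}) \big),
\qquad x^{1} = W x^{0} - \alpha \nabla f(x^{0}).

Gradient tracking (as in Qu and Li, 2017):

x^{k+1} = W x^{k} - \alpha\, y^{k},
\qquad
y^{k+1} = W y^{k} + \nabla f(x^{k+1}) - \nabla f(x^{k}),
\qquad y^{0} = \nabla f(x^{0}).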