An accelerated distributed stochastic gradient method with momentum

K Huang, S Pu, A Nedić - arXiv preprint arXiv:2402.09714, 2024 - arxiv.org
In this paper, we introduce an accelerated distributed stochastic gradient method with
momentum for solving the distributed optimization problem, where a group of $ n $ agents …
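The general idea behind decentralized stochastic gradient methods with momentum can be sketched as follows: each agent mixes its iterate with its neighbors' via a doubly stochastic matrix and takes a heavy-ball momentum step on its local stochastic gradient. This is an illustrative sketch of the generic technique, not the paper's specific accelerated method; the mixing matrix `W`, step sizes, and toy objective are all assumptions.

```python
import numpy as np

def decentralized_sgd_momentum(grad, x0, W, steps=100, lr=0.1, beta=0.9, rng=None):
    """Sketch of decentralized SGD with heavy-ball momentum.

    grad(i, x, rng) returns agent i's stochastic gradient at x;
    W is an n-by-n doubly stochastic mixing matrix (assumption).
    """
    rng = rng or np.random.default_rng(0)
    n, _ = x0.shape
    x = x0.copy()
    v = np.zeros_like(x)              # one momentum buffer per agent
    for _ in range(steps):
        g = np.stack([grad(i, x[i], rng) for i in range(n)])
        v = beta * v + g              # heavy-ball momentum update
        x = W @ x - lr * v            # gossip averaging + local descent
    return x

# Toy usage: n agents minimize the average of f_i(x) = ||x - c_i||^2 / 2;
# the network-average iterate approaches the mean of the c_i.
n, d = 4, 3
c = np.arange(n * d, dtype=float).reshape(n, d)
W = np.full((n, n), 1.0 / n)          # complete-graph mixing matrix
noisy_grad = lambda i, x, rng: (x - c[i]) + 0.01 * rng.standard_normal(d)
x = decentralized_sgd_momentum(noisy_grad, np.zeros((n, d)), W, steps=300, lr=0.05)
```

With an exact averaging matrix as here, consensus is immediate and only the optimization error remains; on a sparse network the same recursion applies with a sparse `W`.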

Decentralized gradient-free methods for stochastic non-smooth non-convex optimization

Z Lin, J Xia, Q Deng, L Luo - Proceedings of the AAAI Conference on …, 2024 - ojs.aaai.org
We consider decentralized gradient-free minimization of Lipschitz continuous
functions that satisfy neither smoothness nor convexity assumptions. We propose two novel …
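Gradient-free (zeroth-order) methods like this typically replace the gradient with an estimate built from function evaluations only. A minimal sketch of the standard two-point Gaussian-smoothing estimator, which underlies many such methods (the smoothing radius `mu` and test function are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def two_point_estimator(f, x, mu=1e-4, rng=None):
    """Estimate the gradient of f at x from two function values along a
    random Gaussian direction; in expectation this is the gradient of a
    smoothed surrogate of f."""
    rng = rng or np.random.default_rng(0)
    u = rng.standard_normal(x.shape)                  # random direction
    return (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u

# Usage: on f(x) = ||x||^2 the estimates average to the true gradient 2x.
rng = np.random.default_rng(1)
x = np.array([1.0, -2.0])
est = np.mean(
    [two_point_estimator(lambda z: z @ z, x, rng=rng) for _ in range(20000)],
    axis=0,
)
```

Each estimate is noisy but unbiased for the smoothed objective, which is what makes such methods usable when the function is nonsmooth and only evaluable, not differentiable.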

Cedas: A compressed decentralized stochastic gradient method with improved convergence

K Huang, S Pu - IEEE Transactions on Automatic Control, 2024 - ieeexplore.ieee.org
In this paper, we consider solving the distributed optimization problem over a multi-agent
network under the communication restricted setting. We study a compressed decentralized …
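In communication-restricted settings, decentralized methods commonly pass messages through a compression operator. A minimal sketch of one standard choice, top-k sparsification (illustrative only; not necessarily the compressor analyzed by CEDAS):

```python
import numpy as np

def top_k(x, k):
    """Keep the k largest-magnitude entries of x and zero out the rest,
    so only k values (plus indices) need to be communicated."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]   # indices of the k largest magnitudes
    out[idx] = x[idx]
    return out

v = np.array([0.1, -3.0, 0.5, 2.0])
c = top_k(v, 2)   # keeps -3.0 and 2.0, zeros the rest
```

Such compressors are contractive rather than unbiased, which is why compressed methods typically pair them with error feedback or carefully designed correction terms to retain convergence.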

Problem-parameter-free decentralized nonconvex stochastic optimization

J Li, X Chen, S Ma, M Hong - arXiv preprint arXiv:2402.08821, 2024 - arxiv.org
Existing decentralized algorithms usually require knowledge of problem parameters for
updating local iterates. For example, the hyperparameters (such as learning rate) usually …

Decentralized Stochastic Optimization With Pairwise Constraints and Variance Reduction

F Han, X Cao, Y Gong - IEEE Transactions on Signal …, 2024 - ieeexplore.ieee.org
This paper focuses on minimizing the decentralized finite-sum optimization over a network,
where each pair of neighboring agents is associated with a nonlinear proximity constraint …

Fully First-Order Methods for Decentralized Bilevel Optimization

X Wang, X Chen, S Ma, T Zhang - arXiv preprint arXiv:2410.19319, 2024 - arxiv.org
This paper focuses on decentralized stochastic bilevel optimization (DSBO) where agents
only communicate with their neighbors. We propose Decentralized Stochastic Gradient …

Decentralized Stochastic Subgradient Methods for Nonsmooth Nonconvex Optimization

S Zhang, N Xiao, X Liu - arXiv preprint arXiv:2403.11565, 2024 - arxiv.org
In this paper, we concentrate on decentralized optimization problems with nonconvex and
nonsmooth objective functions, especially on the decentralized training of nonsmooth neural …

Distributed Normal Map-based Stochastic Proximal Gradient Methods over Networks

K Huang, S Pu, A Nedić - arXiv preprint arXiv:2412.13054, 2024 - arxiv.org
Consider $n$ agents connected over a network that collaborate to minimize the average of their
local cost functions combined with a common nonsmooth function. This paper introduces a …
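For composite objectives of this form, smooth average plus common nonsmooth term, the basic building block is a proximal gradient step. A minimal sketch with r(x) = λ‖x‖₁ as an illustrative nonsmooth term (its proximal operator is soft-thresholding); this is the generic step, not the paper's normal-map-based method:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1: shrink each entry toward zero by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_grad_step(x, stoch_grad, lr, lam):
    """One stochastic proximal gradient step for f(x) + lam*||x||_1."""
    return soft_threshold(x - lr * stoch_grad, lr * lam)

x = np.array([1.0, -0.2, 0.05])
g = np.zeros(3)
# With a zero gradient, one step just shrinks x toward zero by lr*lam = 0.1.
x_new = prox_grad_step(x, g, lr=1.0, lam=0.1)   # → [0.9, -0.1, 0.0]
```

In the decentralized setting, each agent would apply such a step to its local iterate and combine it with communication over the network.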