Advances in asynchronous parallel and distributed optimization

M Assran, A Aytekin, HR Feyzmahdavian… - Proceedings of the …, 2020 - ieeexplore.ieee.org
Motivated by large-scale optimization problems arising in the context of machine learning,
there have been several advances in the study of asynchronous parallel and distributed …

Federated optimization: Distributed machine learning for on-device intelligence

J Konečný, HB McMahan, D Ramage… - arXiv preprint arXiv …, 2016 - arxiv.org
We introduce a new and increasingly relevant setting for distributed optimization in machine
learning, where the data defining the optimization are unevenly distributed over an …
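
The uneven, device-local data split this snippet refers to is usually written as a weighted finite-sum objective; a sketch in the standard notation (K clients, client k holding n_k of the n training examples):

\min_{w} \; f(w) = \sum_{k=1}^{K} \frac{n_k}{n} F_k(w), \qquad F_k(w) = \frac{1}{n_k} \sum_{i \in \mathcal{P}_k} \ell(w; x_i, y_i),

where \mathcal{P}_k indexes the examples stored on device k and \ell is the per-example loss; the weights n_k/n capture the unbalanced local data sizes.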

Push–pull gradient methods for distributed optimization in networks

S Pu, W Shi, J Xu, A Nedić - IEEE Transactions on Automatic …, 2020 - ieeexplore.ieee.org
In this article, we focus on solving a distributed convex optimization problem in a network,
where each agent has its own convex cost function and the goal is to minimize the sum of …
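
For concreteness, the network problem described in this snippet is the usual consensus formulation (a sketch; f_i denotes agent i's private convex cost over n agents):

\min_{x \in \mathbb{R}^p} \; f(x) = \sum_{i=1}^{n} f_i(x),

with each agent able to evaluate only its own f_i and to exchange information with its neighbors in the network.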

Global convergence of ADMM in nonconvex nonsmooth optimization

Y Wang, W Yin, J Zeng - Journal of Scientific Computing, 2019 - Springer
In this paper, we analyze the convergence of the alternating direction method of multipliers
(ADMM) for minimizing a nonconvex and possibly nonsmooth objective function, ϕ(x_0 …
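
As context for this snippet, the classical two-block ADMM iteration for \min_{x,z} f(x) + g(z) subject to Ax + Bz = c (a textbook sketch, not the exact multi-block nonconvex setting the paper analyzes) is

x^{k+1} = \arg\min_x \; L_\rho(x, z^k, y^k), \qquad
z^{k+1} = \arg\min_z \; L_\rho(x^{k+1}, z, y^k), \qquad
y^{k+1} = y^k + \rho \, (A x^{k+1} + B z^{k+1} - c),

with augmented Lagrangian L_\rho(x, z, y) = f(x) + g(z) + y^\top (Ax + Bz - c) + \tfrac{\rho}{2}\|Ax + Bz - c\|^2.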

LAG: Lazily aggregated gradient for communication-efficient distributed learning

T Chen, G Giannakis, T Sun… - Advances in neural …, 2018 - proceedings.neurips.cc
This paper presents a new class of gradient methods for distributed machine learning that
adaptively skip the gradient calculations to learn with reduced communication and …
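
The gradient-skipping idea in this snippet can be illustrated with a minimal sketch. Assumptions: a synchronous server loop, per-worker gradient oracles grad_fns, and a made-up trigger constant are all hypothetical; unlike the paper's LAG-WK/LAG-PS rules, this toy version still computes every gradient and only skips the upload.

import numpy as np

def lag_loop(grad_fns, theta0, lr=0.05, trigger=10.0, steps=200):
    """Toy lazily-aggregated-gradient loop (illustrative sketch).

    grad_fns : list of callables, one per worker, mapping parameters -> gradient
    trigger  : heuristic constant scaling the communication-skipping test
    """
    theta = np.asarray(theta0, dtype=float).copy()
    M = len(grad_fns)
    stale = [g(theta) for g in grad_fns]   # last gradient each worker uploaded
    theta_prev = theta.copy()
    comms = 0
    for _ in range(steps):
        movement = np.linalg.norm(theta - theta_prev) ** 2
        agg = np.zeros_like(theta)
        for m, g in enumerate(grad_fns):
            fresh = g(theta)
            # upload only if worker m's gradient changed enough relative to
            # how far the iterate itself has moved (LAG-style trigger)
            if np.linalg.norm(fresh - stale[m]) ** 2 >= (trigger / M**2) * movement:
                stale[m] = fresh
                comms += 1
            agg += stale[m]                # otherwise the server reuses the stale copy
        theta_prev = theta.copy()
        theta = theta - lr * agg
    return theta, comms

# usage sketch: M quadratic objectives f_m(x) = 0.5 * a_m * ||x - b_m||^2
# grads = [lambda x, a=a, b=b: a * (x - b) for a, b in zip(coeffs, centers)]
# theta, comms = lag_loop(grads, np.zeros(5))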

VAFL: a method of vertical asynchronous federated learning

T Chen, X Jin, Y Sun, W Yin - arXiv preprint arXiv:2007.06081, 2020 - arxiv.org
Horizontal Federated learning (FL) handles multi-client data that share the same set of
features, and vertical FL trains a better predictor that combines all the features from different …
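
As a sketch of the vertical setting this snippet describes (notation is illustrative, not the paper's): each sample's feature vector is split across C clients as x = (x_1, …, x_C), client c keeps x_c and a local map h_c, and a shared predictor is trained on the clients' intermediate outputs:

\min_{\theta_0, \theta_1, \dots, \theta_C} \; \frac{1}{N} \sum_{i=1}^{N} \ell\big( y_i, \; h_0(\theta_0; \, h_1(\theta_1; x_{i,1}), \dots, h_C(\theta_C; x_{i,C})) \big),

so only the outputs h_c(\theta_c; x_{i,c}), never the raw features x_{i,c}, leave client c.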

FedBCD: A communication-efficient collaborative learning framework for distributed features

Y Liu, X Zhang, Y Kang, L Li, T Chen… - IEEE Transactions …, 2022 - ieeexplore.ieee.org
We introduce a novel federated learning framework that allows multiple parties, each holding a
different set of attributes about the same users, to jointly build models without exposing their raw data …

Slow and stale gradients can win the race: Error-runtime trade-offs in distributed SGD

S Dutta, G Joshi, S Ghosh, P Dube… - International …, 2018 - proceedings.mlr.press
Distributed Stochastic Gradient Descent (SGD), when run in a synchronous manner,
suffers from delays in waiting for the slowest learners (stragglers). Asynchronous methods …
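
The error-runtime trade-off in this snippet can be made concrete with a toy discrete-event sketch (everything here, from the exponential delays to the worker count, is an illustrative assumption): the asynchronous server applies each gradient as soon as it arrives, so it never waits for a straggler, but every applied gradient was computed at a parameter version that is "staleness" updates old.

import heapq
import random

def async_sgd_trace(num_workers=4, num_updates=20, seed=0):
    """Event-driven toy: an asynchronous SGD server applying stale gradients."""
    rng = random.Random(seed)
    version = 0                      # server's current parameter version
    # (finish_time, worker_id, version the worker's gradient was computed at)
    events = [(rng.expovariate(1.0), w, 0) for w in range(num_workers)]
    heapq.heapify(events)
    for _ in range(num_updates):
        finish, w, start_version = heapq.heappop(events)
        staleness = version - start_version   # updates missed while computing
        # a real server would apply: theta -= lr * stale_gradient here
        print(f"t={finish:6.2f}  worker={w}  staleness={staleness}")
        version += 1
        # the worker immediately starts a fresh gradient at the new version
        heapq.heappush(events, (finish + rng.expovariate(1.0), w, version))

A synchronous round, by contrast, would advance the clock by the maximum of the workers' delays before applying a single averaged update, which is exactly the straggler penalty the snippet mentions.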

Perturbed iterate analysis for asynchronous stochastic optimization

H Mania, X Pan, D Papailiopoulos, B Recht… - SIAM Journal on …, 2017 - SIAM
We introduce and analyze stochastic optimization methods where the input to each update
is perturbed by bounded noise. We show that this framework forms the basis of a unified …
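
In the notation commonly used for this line of analysis (which may differ from the paper's exact symbols), the framework in this snippet studies recursions of the form

x_{t+1} = x_t - \gamma \, g(\hat{x}_t, \xi_t),

where \hat{x}_t is a perturbed version of the true iterate x_t (for example, an inconsistent read produced by lock-free asynchronous updates) whose deviation from x_t is bounded, and g(\cdot, \xi_t) is a stochastic gradient.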

ScaNeRF: Scalable bundle-adjusting neural radiance fields for large-scale scene rendering

X Wu, J Xu, X Zhang, H Bao, Q Huang, Y Shen… - ACM Transactions on …, 2023 - dl.acm.org
High-quality large-scale scene rendering requires a scalable representation and accurate
camera poses. This research combines tile-based hybrid neural fields with parallel …