The evolution of distributed systems for graph neural networks and their origin in graph processing and deep learning: A survey

J Vatter, R Mayer, HA Jacobsen - ACM Computing Surveys, 2023 - dl.acm.org
Graph neural networks (GNNs) are an emerging research field. This specialized deep
neural network architecture is capable of processing graph-structured data and bridges the …

A field guide to federated optimization

J Wang, Z Charles, Z Xu, G Joshi, HB McMahan… - arXiv preprint arXiv…, 2021 - arxiv.org
Federated learning and analytics are distributed approaches for collaboratively learning
models (or statistics) from decentralized data, motivated by and designed for privacy …

Federated learning with buffered asynchronous aggregation

J Nguyen, K Malik, H Zhan… - International …, 2022 - proceedings.mlr.press
Scalability and privacy are two critical concerns for cross-device federated learning (FL)
systems. In this work, we identify that synchronous FL cannot scale efficiently beyond a few …
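The mechanism named in the title, buffered asynchronous aggregation, is straightforward to sketch: clients send model updates whenever they finish, and the server applies an aggregate only once a buffer of K updates has filled. A minimal Python sketch under those assumptions (the function name, buffer size K, and server step are illustrative, not the paper's code):

    import numpy as np

    def buffered_async_fl(client_updates, dim, K=3, server_lr=1.0):
        # client_updates: iterable of model deltas (arrays) in completion order
        x = np.zeros(dim)
        buffer = []
        for delta in client_updates:
            buffer.append(delta)
            if len(buffer) == K:                       # aggregate once K updates buffered
                x += server_lr * np.mean(buffer, axis=0)
                buffer = []                            # reset; clients keep streaming in
        return x

Decoupling aggregation from per-round synchronization is what lets such a protocol avoid the stragglers that stall synchronous FL rounds.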

Advances and open problems in federated learning

P Kairouz, HB McMahan, B Avent… - … and Trends® in …, 2021 - nowpublishers.com
Federated learning (FL) is a machine learning setting where many clients (e.g., mobile
devices or whole organizations) collaboratively train a model under the orchestration of a …

FreeLB: Enhanced adversarial training for natural language understanding

C Zhu, Y Cheng, Z Gan, S Sun, T Goldstein… - arXiv preprint arXiv…, 2019 - arxiv.org
Adversarial training, which minimizes the maximal risk for label-preserving input
perturbations, has proved to be effective for improving the generalization of language …

Machine learning at the wireless edge: Distributed stochastic gradient descent over-the-air

MM Amiri, D Gündüz - IEEE Transactions on Signal Processing, 2020 - ieeexplore.ieee.org
We study federated machine learning (ML) at the wireless edge, where power- and
bandwidth-limited wireless devices with local datasets carry out distributed stochastic …

Sharper convergence guarantees for asynchronous SGD for distributed and federated learning

A Koloskova, SU Stich, M Jaggi - Advances in Neural …, 2022 - proceedings.neurips.cc
We study the asynchronous stochastic gradient descent algorithm for distributed training
over $n$ workers that might be heterogeneous. In this algorithm, workers compute …
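For concreteness, a toy simulation of the asynchronous SGD loop studied here: each worker reads the model, computes a stochastic gradient, and the server applies gradients in whatever order workers finish, so an update may be based on stale parameters. The least-squares objective and all names below are illustrative assumptions, not the paper's code:

    import numpy as np

    def stochastic_grad(x, batch):
        A, b = batch                                   # toy least-squares mini-batch
        return A.T @ (A @ x - b) / len(b)

    def async_sgd(worker_batches, dim, lr=0.1, steps=200, seed=0):
        rng = np.random.default_rng(seed)
        x = np.zeros(dim)
        stale = [x.copy() for _ in worker_batches]     # each worker's possibly stale copy
        for _ in range(steps):
            w = rng.integers(len(worker_batches))      # an arbitrary worker finishes first
            g = stochastic_grad(stale[w], worker_batches[w])
            x = x - lr * g                             # server applies the stale gradient
            stale[w] = x.copy()                        # that worker re-reads the model
        return x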

SAFA: A semi-asynchronous protocol for fast federated learning with low overhead

W Wu, L He, W Lin, R Mao, C Maple… - IEEE Transactions on …, 2020 - ieeexplore.ieee.org
Federated learning (FL) has attracted increasing attention as a promising approach to
driving a vast number of end devices with artificial intelligence. However, it is very …

Cooperative SGD: A unified framework for the design and analysis of communication-efficient SGD algorithms

J Wang, G Joshi - arXiv preprint arXiv:1808.07576, 2018 - arxiv.org
Communication-efficient SGD algorithms, which allow nodes to perform local updates and
periodically synchronize local models, are highly effective in improving the speed and …
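The pattern this framework unifies, tau local SGD steps followed by one model-averaging step, fits in a few lines; the gradient oracle and all names below are assumptions for illustration, not the paper's algorithm:

    import numpy as np

    def local_update_sgd(grad, dim, workers=4, tau=5, rounds=20, lr=0.05):
        # grad: callable(x, w) -> stochastic gradient for worker w's local data
        models = [np.zeros(dim) for _ in range(workers)]
        for _ in range(rounds):
            for w in range(workers):
                for _ in range(tau):                   # tau local steps, no communication
                    models[w] = models[w] - lr * grad(models[w], w)
            avg = sum(models) / workers                # one sync: average the local models
            models = [avg.copy() for _ in range(workers)]
        return models[0]

Setting tau = 1 recovers fully synchronous SGD; larger tau trades communication cost for extra drift between the local models.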

Cooperative SGD: A unified framework for the design and analysis of local-update SGD algorithms

J Wang, G Joshi - Journal of Machine Learning Research, 2021 - jmlr.org
When training machine learning models using stochastic gradient descent (SGD) with a
large number of nodes or massive edge devices, the communication cost of synchronizing …