A survey on distributed machine learning

J Verbraeken, M Wolting, J Katzy… - ACM Computing Surveys …, 2020 - dl.acm.org
The demand for artificial intelligence has grown significantly over the past decade, and this
growth has been fueled by advances in machine learning techniques and the ability to …

Federated learning for 6G: Applications, challenges, and opportunities

Z Yang, M Chen, KK Wong, HV Poor, S Cui - Engineering, 2022 - Elsevier
Standard machine-learning approaches involve the centralization of training data in a data
center, where centralized machine-learning algorithms can be applied for data analysis and …
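
The snippet contrasts this centralized setup with federated alternatives. As a point of reference, below is a minimal FedAvg-style sketch in NumPy; the linear model, client datasets, number of rounds, and learning rate are all hypothetical illustration choices, not anything taken from the cited paper.

```python
import numpy as np

# Hypothetical setup: a linear model trained by three clients that never
# share their raw data, only locally trained models (FedAvg-style aggregation).
rng = np.random.default_rng(0)
clients = [  # (features, labels) kept local to each client
    (rng.normal(size=(50, 5)), rng.normal(size=50)),
    (rng.normal(size=(80, 5)), rng.normal(size=80)),
    (rng.normal(size=(30, 5)), rng.normal(size=30)),
]

def local_sgd(w, X, y, lr=0.01, epochs=5):
    """One client's local training: a few least-squares gradient steps."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

w_global = np.zeros(5)
for rnd in range(10):                      # communication rounds
    local_models = [local_sgd(w_global, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    # Server aggregates: average weighted by local dataset size.
    w_global = np.average(local_models, axis=0, weights=sizes)

print("global model after 10 rounds:", w_global)
```

Weighting the average by local dataset size is the usual FedAvg design choice; only model parameters cross the network, never the training data.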

Edge learning for B5G networks with distributed signal processing: Semantic communication, edge computing, and wireless sensing

W Xu, Z Yang, DWK Ng, M Levorato… - IEEE Journal of …, 2023 - ieeexplore.ieee.org
To process and transfer large amounts of data in emerging wireless services, it has become
increasingly appealing to exploit distributed data communication and learning. Specifically …

Asynchronous online federated learning for edge devices with non-IID data

Y Chen, Y Ning, M Slawski… - 2020 IEEE International …, 2020 - ieeexplore.ieee.org
Federated learning (FL) is a machine learning paradigm where a shared central model is
learned across distributed devices while the training data remains on these devices …
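
To illustrate the asynchronous flavor of this setting, here is a minimal sketch in which a server merges each client update as soon as it arrives and down-weights stale ones. The decay rule, arrival order, and dimensions are assumptions for illustration, not the algorithm proposed in the cited paper.

```python
import numpy as np

# Illustrative asynchronous aggregation loop: the server applies each client
# update on arrival, discounting updates computed against an old model version.
rng = np.random.default_rng(1)
dim = 4
w_server = np.zeros(dim)
server_version = 0

def staleness_weight(staleness, base=0.5):
    """Down-weight updates computed on an outdated model version (assumed rule)."""
    return base / (1.0 + staleness)

# Simulated stream of (client_update, model_version_the_client_trained_on).
arrivals = [(rng.normal(size=dim) * 0.1, v) for v in [0, 0, 1, 1, 3, 2]]

for update, version_seen in arrivals:
    staleness = server_version - version_seen
    alpha = staleness_weight(staleness)
    w_server = w_server + alpha * update   # merge immediately, no waiting
    server_version += 1

print("server model:", w_server)
```

The key difference from synchronous FL is that the server never waits for a full cohort of clients; it trades consistency for responsiveness on slow or intermittently connected edge devices.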

Practical block-wise neural network architecture generation

Z Zhong, J Yan, W Wu, J Shao… - Proceedings of the IEEE …, 2018 - openaccess.thecvf.com
Convolutional neural networks have achieved remarkable success in computer vision.
However, most usable network architectures are hand-crafted and usually require expertise …

FedRS: Federated learning with restricted softmax for label distribution non-IID data

XC Li, DC Zhan - Proceedings of the 27th ACM SIGKDD conference on …, 2021 - dl.acm.org
Federated Learning (FL) aims to generate a globally shared model through the collaboration of
decentralized clients under privacy constraints. Unlike standard distributed optimization …
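
The "restricted softmax" idea can be sketched as scaling down the logits of classes a client never observes locally before computing cross-entropy, so local updates do not overly penalize classes the client has no data for. The sketch below is a simplified reading with an assumed scaling factor alpha, not necessarily the exact FedRS formulation.

```python
import numpy as np

def restricted_softmax_loss(logits, labels, local_classes, alpha=0.5):
    """Cross-entropy where logits of locally missing classes are scaled by alpha."""
    num_classes = logits.shape[1]
    scale = np.full(num_classes, alpha)
    scale[list(local_classes)] = 1.0       # full weight for classes seen locally
    z = logits * scale                     # restrict logits of missing classes
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# Toy example: this client only holds samples of classes {0, 2} out of 4.
rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 4))
labels = rng.choice([0, 2], size=8)
print(restricted_softmax_loss(logits, labels, local_classes={0, 2}))
```

The intent is that, under label-distribution skew, a client's gradient should say little about classes it never sees rather than pushing their scores toward zero.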

Communication efficient distributed machine learning with the parameter server

M Li, DG Andersen, AJ Smola… - Advances in Neural …, 2014 - proceedings.neurips.cc
This paper describes a third-generation parameter server framework for distributed machine
learning. This framework offers two relaxations to balance system performance and …
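
A parameter server exposes a pull/push interface between a shared weight store and workers that compute gradients on their data shards. The single-process mock below illustrates only that interface; the class and method names are illustrative rather than the cited framework's actual API, and the relaxations the paper refers to are omitted.

```python
import numpy as np

class ParameterServer:
    """Toy stand-in for a parameter server holding the shared weights."""
    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)
        self.lr = lr

    def pull(self):
        return self.w.copy()               # workers fetch the latest weights

    def push(self, grad):
        self.w -= self.lr * grad           # apply a worker gradient on arrival

def worker_gradient(w, X, y):
    """Least-squares gradient on this worker's data shard."""
    return 2 * X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
shards = [(rng.normal(size=(40, 3)), rng.normal(size=40)) for _ in range(4)]
ps = ParameterServer(dim=3)

for step in range(20):
    for X, y in shards:                    # in a real system these run in parallel
        w_local = ps.pull()                # pull latest weights
        ps.push(worker_gradient(w_local, X, y))   # push gradient back

print("weights:", ps.w)
```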

Enabling resource-efficient AIoT system with cross-level optimization: A survey

S Liu, B Guo, C Fang, Z Wang, S Luo… - … Surveys & Tutorials, 2023 - ieeexplore.ieee.org
The emerging field of artificial intelligence of things (AIoT, AI + IoT) is driven by the
widespread use of intelligent infrastructures and the impressive success of deep learning …

Byzantine machine learning: A primer

R Guerraoui, N Gupta, R Pinot - ACM Computing Surveys, 2024 - dl.acm.org
The problem of Byzantine resilience in distributed machine learning, also known as Byzantine
machine learning, consists of designing distributed algorithms that can train an accurate model …
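
One classical way to make gradient aggregation Byzantine-resilient is to replace the plain mean with a robust statistic such as the coordinate-wise median. The toy example below shows why, using synthetic honest and malicious gradients; it is a generic illustration of the problem setting, not a specific algorithm from the primer.

```python
import numpy as np

# A minority of Byzantine workers send arbitrary gradients. The mean is
# dragged far from the truth; the coordinate-wise median stays close.
rng = np.random.default_rng(0)
true_grad = np.array([1.0, -2.0, 0.5])

honest = [true_grad + rng.normal(scale=0.1, size=3) for _ in range(7)]
byzantine = [np.array([100.0, 100.0, -100.0]) for _ in range(3)]  # malicious
grads = np.stack(honest + byzantine)

mean_agg = grads.mean(axis=0)            # corrupted by the Byzantine workers
median_agg = np.median(grads, axis=0)    # remains near the honest gradient

print("mean aggregate:  ", mean_agg)
print("median aggregate:", median_agg)
```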

Asynchronous parallel stochastic gradient for nonconvex optimization

X Lian, Y Huang, Y Li, J Liu - Advances in Neural …, 2015 - proceedings.neurips.cc
Asynchronous parallel implementations of stochastic gradient (SG) methods have been broadly
used for training deep neural networks and have achieved many successes in practice recently …
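
The execution model can be sketched with threads that update a shared parameter vector without locks, so each gradient may be computed on slightly stale parameters. The toy nonconvex objective, step size, and thread count below are illustrative assumptions, not the paper's experimental setup.

```python
import threading
import numpy as np

# Toy asynchronous parallel SGD: four threads repeatedly read the shared
# parameters, compute a noisy gradient of the nonconvex objective
# f(w) = sum(w_i**4 - 2 * w_i**2), and write updates back without locking.
w = np.array([3.0, -3.0])                # shared parameters, updated in place

def worker(seed, steps=2000, lr=1e-3):
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        snapshot = w.copy()                              # possibly stale read
        grad = 4 * snapshot**3 - 4 * snapshot            # exact gradient
        grad += rng.normal(scale=0.1, size=snapshot.shape)  # stochastic noise
        w[:] = w - lr * grad                             # lock-free in-place write

threads = [threading.Thread(target=worker, args=(seed,)) for seed in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Each coordinate settles near one of the nonconvex minima at +/-1.
print("final parameters:", w)
```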