Holistic network virtualization and pervasive network intelligence for 6G

X. Shen, J. Gao, W. Wu, M. Li, C. Zhou, et al. - IEEE Communications Surveys & Tutorials, 2021 - ieeexplore.ieee.org
In this tutorial paper, we look into the evolution and prospects of network architecture and
propose a novel conceptual architecture for sixth-generation (6G) networks. The proposed …

Communication-efficient distributed deep learning: A comprehensive survey

Z. Tang, S. Shi, W. Wang, B. Li, X. Chu - arXiv preprint arXiv:2003.06307, 2020 - arxiv.org
Distributed deep learning (DL) has become prevalent in recent years to reduce training time
by leveraging multiple computing devices (e.g., GPUs/TPUs) due to larger models and …
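
The mechanism behind this line of work, synchronous data-parallel SGD with gradient averaging, is compact enough to sketch. Below is a minimal single-process simulation in NumPy; the toy least-squares task and all names are illustrative, not from the survey:

```python
import numpy as np

def allreduce_mean(grads):
    """Average per-worker gradients (simulates an all-reduce)."""
    return np.mean(grads, axis=0)

# Toy least-squares task, data sharded across 4 simulated workers.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(400, 5)), rng.normal(size=400)
shards = np.array_split(np.arange(400), 4)
w = np.zeros(5)

for step in range(100):
    # Each worker computes a gradient on its own data shard...
    grads = [2 * X[s].T @ (X[s] @ w - y[s]) / len(s) for s in shards]
    # ...then every worker applies the same averaged gradient.
    w -= 0.05 * allreduce_mean(grads)
```

In a real deployment the averaging step is an all-reduce over the network between GPUs/TPUs; its cost is what the communication-efficiency techniques surveyed here try to reduce.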

The right to be forgotten in federated learning: An efficient realization with rapid retraining

Y. Liu, L. Xu, X. Yuan, C. Wang, B. Li - IEEE INFOCOM 2022 - IEEE Conference on Computer Communications, 2022 - ieeexplore.ieee.org
In machine learning, the emergence of the right to be forgotten gave birth to a paradigm
named machine unlearning, which enables data holders to proactively erase their data from …
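
The snippet cuts off before the paper's rapid-retraining construction, so the sketch below shows only the naive baseline that any unlearning method is measured against: retraining from scratch on the remaining data. The toy model and names are hypothetical:

```python
import numpy as np

def train(X, y, epochs=200, lr=0.1):
    """Fit a least-squares model by gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

rng = np.random.default_rng(1)
X, y = rng.normal(size=(200, 3)), rng.normal(size=200)
w_full = train(X, y)

# A data holder exercises the right to be forgotten for rows 0-19:
keep = np.arange(20, 200)
w_unlearned = train(X[keep], y[keep])  # model as if that data never existed
```

The cost of this baseline is a full retraining pass per erasure request, which is exactly what rapid-retraining approaches aim to avoid.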

Adaptive gradient sparsification for efficient federated learning: An online learning approach

P. Han, S. Wang, K. K. Leung - 2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS), 2020 - ieeexplore.ieee.org
Federated learning (FL) is an emerging technique for training machine learning models
using geographically dispersed data collected by local entities. It includes local computation …
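
The abstract truncates before the method itself. For context, a common form of gradient sparsification, top-k selection with error feedback, is sketched below; the paper's actual contribution, tuning the degree of sparsity online, is not implemented here:

```python
import numpy as np

def topk_sparsify(grad, k, residual):
    """Keep the k largest-magnitude entries; accumulate the rest locally."""
    g = grad + residual                      # error feedback from prior rounds
    idx = np.argpartition(np.abs(g), -k)[-k:]
    sparse = np.zeros_like(g)
    sparse[idx] = g[idx]                     # only these k values are transmitted
    return sparse, g - sparse                # new residual stays on-device

rng = np.random.default_rng(2)
residual = np.zeros(1000)
grad = rng.normal(size=1000)
sparse, residual = topk_sparsify(grad, k=10, residual=residual)
print(np.count_nonzero(sparse))  # -> 10 of 1000 entries sent
```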

Gradient driven rewards to guarantee fairness in collaborative machine learning

X. Xu, L. Lyu, X. Ma, C. Miao, C. S. Foo, et al. - Advances in Neural Information Processing Systems, 2021 - proceedings.neurips.cc
In collaborative machine learning (CML), multiple agents pool their resources (e.g., data)
together for a common learning task. In realistic CML settings where the agents are self …
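
As a rough illustration of what "gradient driven rewards" can mean, the sketch below scores each agent by the cosine alignment of its gradient with the aggregate and normalizes the scores into reward shares. This is a simplified stand-in, not the paper's exact valuation scheme:

```python
import numpy as np

def contribution_scores(agent_grads):
    """Score each agent by cosine alignment of its gradient with the aggregate."""
    agg = np.mean(agent_grads, axis=0)
    scores = [max(0.0, g @ agg / (np.linalg.norm(g) * np.linalg.norm(agg) + 1e-12))
              for g in agent_grads]
    total = sum(scores) + 1e-12
    return [s / total for s in scores]       # shares sum to ~1

rng = np.random.default_rng(3)
grads = [rng.normal(size=8) for _ in range(5)]
print(contribution_scores(grads))  # higher share -> larger reward
```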

Communication-efficient federated learning with adaptive parameter freezing

C. Chen, H. Xu, W. Wang, B. Li, B. Li, et al. - 2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS), 2021 - ieeexplore.ieee.org
Federated learning allows edge devices to collaboratively train a global model by
synchronizing their local updates without sharing private data. Yet, with limited network …
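
The idea named in the title can be illustrated simply: parameters whose updates have become negligibly small are "frozen" and excluded from synchronization, shrinking each round's transfer. A minimal sketch follows; the threshold is an assumption, and the actual algorithm also unfreezes parameters periodically to re-check their stability:

```python
import numpy as np

def select_unfrozen(prev_w, new_w, threshold=1e-4):
    """Mark parameters whose update fell below the threshold as frozen;
    only the unfrozen ones are synchronized this round."""
    delta = np.abs(new_w - prev_w)
    return delta >= threshold        # boolean mask of parameters to transmit

prev = np.array([0.50, 0.10, -0.30, 0.80])
new  = np.array([0.50005, 0.25, -0.30002, 0.60])
mask = select_unfrozen(prev, new)
print(mask)                          # [False  True False  True]
print(f"sync {mask.sum()}/{mask.size} parameters")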

Adaptive batch size for federated learning in resource-constrained edge computing

Z. Ma, Y. Xu, H. Xu, Z. Meng, L. Huang, et al. - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
The emerging Federated Learning (FL) enables IoT devices to collaboratively learn a
shared model based on their local datasets. However, due to end devices' heterogeneity, it …
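
One way to make a round robust to device heterogeneity, in the spirit of the title, is to size each device's local batch to its measured throughput so that all devices finish within a common time budget. A hypothetical sketch (budget and bounds are illustrative, not the paper's policy):

```python
def assign_batch_sizes(samples_per_sec, round_seconds=2.0, b_min=8, b_max=256):
    """Give each device a batch it can process within the round's time budget."""
    return [int(min(b_max, max(b_min, rate * round_seconds)))
            for rate in samples_per_sec]

# Measured throughputs of four heterogeneous IoT devices (samples/second):
print(assign_batch_sizes([5, 40, 90, 300]))  # -> [10, 80, 180, 256]
```

Slow devices get small batches and fast devices large ones, so no single straggler stalls the synchronous aggregation step.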

Toward communication-efficient federated learning in the Internet of Things with edge computing

H. Sun, S. Li, F. R. Yu, Q. Qi, J. Wang, et al. - IEEE Internet of Things Journal, 2020 - ieeexplore.ieee.org
Federated learning is an emerging concept that trains machine learning models on local,
distributed datasets without sending the raw data to a data center. However, in the …
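
For context, the server-side step that replaces raw-data upload in federated learning is FedAvg aggregation: a weighted average of client models by local dataset size. A minimal sketch (the paper's specific communication optimizations for IoT edge computing are not shown):

```python
import numpy as np

def fedavg(updates, num_samples):
    """Server-side FedAvg: average client models weighted by local data size."""
    weights = np.array(num_samples, dtype=float)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

clients = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([0.0, 1.0])]
print(fedavg(clients, num_samples=[100, 50, 50]))  # -> [1.25 1.25]
```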

MG-WFBP: Efficient data communication for distributed synchronous SGD algorithms

S. Shi, X. Chu, B. Li - IEEE INFOCOM 2019 - IEEE Conference on Computer Communications, 2019 - ieeexplore.ieee.org
Distributed synchronous stochastic gradient descent has been widely used to train deep
neural networks on computer clusters. With the increase of computational power, network …
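
MG-WFBP's central observation is that many small per-layer all-reduce calls waste network latency, so gradients should be merged into larger buffers before communication. The sketch below shows a simplified greedy bucketing; the paper instead derives the optimal merge analytically, and the threshold here is an arbitrary stand-in:

```python
import numpy as np

def merge_buckets(layer_grads, min_bucket_elems=1_000_000):
    """Group per-layer gradients (back-to-front, as backprop emits them)
    into buckets large enough to amortize all-reduce startup latency."""
    buckets, current, size = [], [], 0
    for g in reversed(layer_grads):          # last layer's gradient is ready first
        current.append(g)
        size += g.size
        if size >= min_bucket_elems:
            buckets.append(np.concatenate([x.ravel() for x in current]))
            current, size = [], 0
    if current:
        buckets.append(np.concatenate([x.ravel() for x in current]))
    return buckets                           # each bucket -> one all-reduce call

grads = [np.zeros(n) for n in (400_000, 300_000, 500_000, 2_000_000)]
print([b.size for b in merge_buckets(grads)])  # -> [2000000, 1200000]
```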

Preemptive all-reduce scheduling for expediting distributed DNN training

Y. Bao, Y. Peng, Y. Chen, C. Wu - IEEE INFOCOM 2020 - IEEE Conference on Computer Communications, 2020 - ieeexplore.ieee.org
Data-parallel training is widely used for scaling DNN training over large datasets, using the
parameter server or all-reduce architecture. Communication scheduling has been promising …
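
Two ingredients commonly attributed to preemptive communication scheduling are tensor partitioning, so a large transfer can be interrupted at chunk boundaries, and priority ordering, so gradients needed earliest in the next iteration go out first. A simplified sketch using a priority queue (chunk size and priorities are illustrative, not the paper's exact policy):

```python
import heapq
import numpy as np

def schedule_allreduce(tensors, chunk=250_000):
    """Split tensors into chunks and emit them in priority order, so a
    high-priority tensor effectively preempts a low-priority transfer
    at the next chunk boundary."""
    pq, seq = [], 0                   # (priority, seq, chunk); lower sends first
    for priority, t in tensors:
        flat = t.ravel()
        for i in range(0, flat.size, chunk):
            heapq.heappush(pq, (priority, seq, flat[i:i + chunk]))
            seq += 1
    while pq:
        priority, _, c = heapq.heappop(pq)
        yield priority, c.size        # stand-in for one network transfer

# Layer 0's gradient is needed first in the next forward pass, so it outranks layer 3:
tensors = [(3, np.zeros(600_000)), (0, np.zeros(300_000))]
print([(p, n) for p, n in schedule_allreduce(tensors)])
```

Chunking is what makes preemption possible: without it, a queued multi-hundred-megabyte tensor would block higher-priority gradients until its transfer completed.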