DisPFL: Towards communication-efficient personalized federated learning via decentralized sparse training
Personalized federated learning is proposed to handle the data heterogeneity problem
amongst clients by learning dedicated local models tailored to each user. However, existing …
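The snippet only gestures at the idea of "dedicated local models", so here is a toy numpy sketch of the general pattern of personalized sparse sub-networks: each client owns a binary mask over a shared parameter vector and only trains the coordinates its mask keeps. This is an illustration of the concept, not DisPFL's actual decentralized algorithm; the helper `local_step`, the centralized averaging step, and all sizes are assumptions made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, CLIENTS, SPARSITY = 32, 4, 0.5          # toy sizes; real models are far larger

# One shared dense parameter vector plus a personalized binary mask per client,
# so every client effectively owns a dedicated sparse sub-network.
theta = rng.normal(size=DIM)
masks = [(rng.random(DIM) > SPARSITY).astype(float) for _ in range(CLIENTS)]

# Fake non-IID client data: each client's targets come from a different true model.
datasets = []
for _ in range(CLIENTS):
    X = rng.normal(size=(64, DIM))
    datasets.append((X, X @ rng.normal(size=DIM)))

def local_step(theta, mask, data, lr=0.1):
    """One gradient step on a client's masked (personalized) least-squares model."""
    X, y = data
    grad = X.T @ (X @ (theta * mask) - y) / len(y)
    return theta - lr * grad * mask          # pruned coordinates stay untouched

for _ in range(20):
    # Each client improves only the coordinates its mask keeps ...
    local_models = [local_step(theta, m, d) for m, d in zip(masks, datasets)]
    # ... and the copies are averaged back into the shared vector (centralized
    # averaging here purely for brevity; the paper's setting is decentralized).
    theta = np.mean(local_models, axis=0)

print("per-client masked MSE:",
      [round(float(np.mean((X @ (theta * m) - y) ** 2)), 3)
       for m, (X, y) in zip(masks, datasets)])
```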
Rare gems: Finding lottery tickets at initialization
Large neural networks can be pruned to a small fraction of their original size, with little loss
in accuracy, by following a time-consuming "train, prune, re-train" approach. Frankle & …
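The "train, prune, re-train" recipe the snippet refers to fits in a few lines. The numpy sketch below trains a toy linear model, keeps only the largest-magnitude 20% of weights (global magnitude pruning), and re-trains the survivors. It shows the generic baseline workflow, not the Rare Gems procedure itself (which looks for good sparse tickets at initialization); the `train` helper, the keep ratio, and the step counts are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression task standing in for "train a large network".
X = rng.normal(size=(256, 50))
true_w = np.where(rng.random(50) < 0.2, 3.0 * rng.normal(size=50), 0.0)
y = X @ true_w + 0.1 * rng.normal(size=256)

def train(w, mask, steps=300, lr=0.05):
    """Plain gradient descent restricted to the unpruned coordinates."""
    for _ in range(steps):
        grad = X.T @ (X @ (w * mask) - y) / len(y)
        w = w - lr * grad * mask
    return w

# 1) Train the dense model.
w_dense = train(np.zeros(50), np.ones(50))

# 2) Prune: keep only the 20% largest-magnitude weights.
threshold = np.quantile(np.abs(w_dense), 0.8)
mask = (np.abs(w_dense) >= threshold).astype(float)

# 3) Re-train the surviving weights, i.e. the costly extra pass the abstract mentions.
w_sparse = train(w_dense * mask, mask)

print("kept weights:", int(mask.sum()), "of 50")
print("dense  MSE:", round(float(np.mean((X @ w_dense - y) ** 2)), 4))
print("pruned MSE:", round(float(np.mean((X @ (w_sparse * mask) - y) ** 2)), 4))
```

The re-training pass in step 3 is what makes this loop expensive at scale, which is the cost the lottery-ticket line of work tries to avoid.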
Self-aware personalized federated learning
In the context of personalized federated learning (FL), the critical challenge is to balance
local model improvement and global model tuning when the personal and global objectives …
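One standard way to trade off "local model improvement" against "global model tuning" is a proximal penalty that pulls the personalized model toward the global one (in the spirit of FedProx/Ditto-style objectives). The sketch below shows that generic recipe, not necessarily the mechanism of the paper above; `personalize`, the data generator, and the lambda values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
DIM = 10

# A "global" model (e.g. received from a FedAvg server) and one client's private
# least-squares data whose optimum disagrees with it.
w_global = rng.normal(size=DIM)
X = rng.normal(size=(40, DIM))
y = X @ (w_global + 1.5 * rng.normal(size=DIM))

def personalize(w_global, lam, steps=500, lr=0.05):
    """Minimize  local_loss(w) + (lam / 2) * ||w - w_global||^2.

    lam -> 0:     pure local fitting (maximal personalization)
    lam -> large: the client essentially keeps the global model
    """
    w = w_global.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y) + lam * (w - w_global)
        w = w - lr * grad
    return w

for lam in (0.0, 0.5, 5.0):
    w = personalize(w_global, lam)
    local_mse = float(np.mean((X @ w - y) ** 2))
    drift = float(np.linalg.norm(w - w_global))
    print(f"lam={lam:<4}  local MSE={local_mse:8.3f}  distance to global={drift:5.2f}")
```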
New metrics to evaluate the performance and fairness of personalized federated learning
In Federated Learning (FL), the clients learn a single global model (FedAvg) through a
central aggregator. In this setting, the non-IID distribution of the data across clients restricts …
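Since the snippet name-checks FedAvg with a central aggregator, here is a minimal FedAvg loop on synthetic non-IID least-squares clients: every client trains locally from the current global model, and the aggregator averages the results weighted by dataset size. This is a generic illustration rather than the paper's evaluation setup; `local_train`, the client count, and the data generator are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(3)
DIM, CLIENTS, ROUNDS = 8, 5, 10

# Non-IID toy clients: unequal dataset sizes and client-specific true models.
clients = []
for _ in range(CLIENTS):
    n = int(rng.integers(20, 80))
    X = rng.normal(size=(n, DIM))
    y = X @ (np.ones(DIM) + rng.normal(size=DIM))
    clients.append((X, y))

def local_train(w, X, y, epochs=5, lr=0.05):
    """A few epochs of full-batch gradient descent on one client's data."""
    for _ in range(epochs):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

w_global = np.zeros(DIM)
for _ in range(ROUNDS):
    # Every client starts from the current global model and trains locally ...
    local_models = [local_train(w_global.copy(), X, y) for X, y in clients]
    # ... and the aggregator averages the results weighted by dataset size (FedAvg).
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    w_global = np.average(local_models, axis=0, weights=sizes)

avg_loss = np.mean([np.mean((X @ w_global - y) ** 2) for X, y in clients])
print("mean client MSE of the single global model:", round(float(avg_loss), 3))
```

Because each toy client's optimum differs, the single averaged model fits none of them particularly well, which is the limitation the abstract points to and the reason per-client performance and fairness metrics are of interest.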
Joint Optimization Algorithm of Training Delay and Energy Efficiency for Wireless Large-Scale Distributed Machine Learning Combined With Blockchain for 6G …
X Zhang, X Zhu - IEEE Internet of Things Journal, 2024 - ieeexplore.ieee.org
In 6G, the communication cost of large-scale distributed machine learning (DML) will be
much higher than the computing cost, which will become a bottleneck restricting the …
One-Time Model Adaptation to Heterogeneous Clients: An Intra-Client and Inter-Image Attention Design
The mainstream workflow of image recognition applications is first training one global model
on the cloud for a wide range of classes and then serving numerous clients, each with …
Leveraging Side Information for Communication-Efficient Federated Learning
The high communication cost of sending model updates from the clients to the server is a
significant bottleneck for scalable federated learning (FL). Among existing approaches, state …