An efficient framework for clustered federated learning

A Ghosh, J Chung, D Yin… - Advances in neural …, 2020 - proceedings.neurips.cc
We address the problem of Federated Learning (FL) where users are distributed and
partitioned into clusters. This setup captures settings where different groups of users have …

Advancements in federated learning: Models, methods, and privacy

H Chen, H Wang, Q Long, D Jin, Y Li - ACM Computing Surveys, 2024 - dl.acm.org
Federated learning (FL) is a promising technique for resolving the rising privacy and security
concerns. Its main ingredient is to cooperatively learn the model among the distributed …

An efficient framework for clustered federated learning

A Ghosh, J Chung, D Yin… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
We address the problem of federated learning (FL) where users are distributed and
partitioned into clusters. This setup captures settings where different groups of users have …

Exploiting heterogeneity in robust federated best-arm identification

A Mitra, H Hassani, G Pappas - arXiv preprint arXiv:2109.05700, 2021 - arxiv.org
We study a federated variant of the best-arm identification problem in stochastic multi-armed
bandits: a set of clients, each of whom can sample only a subset of the arms, collaborate via …

Byzantine Robustness and Partial Participation Can Be Achieved at Once: Just Clip Gradient Differences

G Malinovsky, P Richtárik, S Horváth… - arXiv preprint arXiv …, 2023 - arxiv.org
Distributed learning has emerged as a leading paradigm for training large machine learning
models. However, in real-world scenarios, participants may be unreliable or malicious …

Communication compression for byzantine robust learning: New efficient algorithms and improved rates

A Rammal, K Gruntkowska, N Fedin… - International …, 2024 - proceedings.mlr.press
Byzantine robustness is an essential feature of algorithms for certain distributed optimization
problems, typically encountered in collaborative/federated learning. These problems are …

Distributed Newton-type methods with communication compression and Bernoulli aggregation

R Islamov, X Qian, S Hanzely, M Safaryan… - … on Machine Learning …, 2023 - openreview.net
Despite their high computation and communication costs, Newton-type methods remain an
appealing option for distributed training due to their robustness against ill-conditioned …

Collaborative linear bandits with adversarial agents: Near-optimal regret bounds

A Mitra, A Adibi, GJ Pappas… - Advances in neural …, 2022 - proceedings.neurips.cc
We consider a linear stochastic bandit problem involving $M$ agents that can collaborate
via a central server to minimize regret. A fraction $\alpha$ of these agents are adversarial …