Dual-personalizing adapter for federated foundation models

G Long, T Shen, J Jiang… - Advances in Neural Information Processing Systems, 2025 - proceedings.neurips.cc
Recently, foundation models, particularly large language models (LLMs), have
demonstrated an impressive ability to adapt to various tasks by fine-tuning diverse …
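
The title points at a concrete mechanism worth sketching: a shared adapter aggregated across clients alongside a client-local one kept on device. Below is a minimal PyTorch sketch of that dual-adapter idea; the class names, zero-initialization, and the mixing weight `alpha` are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

class LoRAAdapter(nn.Module):
    """Low-rank residual adapter: x -> (x @ A) @ B."""
    def __init__(self, d_in: int, d_out: int, rank: int = 8):
        super().__init__()
        self.A = nn.Parameter(torch.randn(d_in, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(rank, d_out))  # zero-init: starts as a no-op

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.A @ self.B

class DualAdapterLayer(nn.Module):
    """Frozen base layer plus two adapters: a 'global' one aggregated by
    the server and a 'local' one that never leaves the client."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 0.5):
        super().__init__()
        self.base = base.requires_grad_(False)  # backbone stays frozen
        self.global_adapter = LoRAAdapter(base.in_features, base.out_features, rank)
        self.local_adapter = LoRAAdapter(base.in_features, base.out_features, rank)
        self.alpha = alpha  # blend between shared and personal behavior

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return (self.base(x)
                + self.alpha * self.global_adapter(x)
                + (1 - self.alpha) * self.local_adapter(x))
```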

FedLLM-Bench: Realistic benchmarks for federated learning of large language models

R Ye, R Ge, X Zhu, J Chai, Y Du… - Advances in Neural Information Processing Systems, 2025 - proceedings.neurips.cc
Federated learning has enabled multiple parties to collaboratively train large language
models without directly sharing their data (FedLLM). Following this training paradigm, the …

Federated large language models: Current progress and future directions

Y Yao, J Zhang, J Wu, C Huang, Y Xia, T Yu… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models are rapidly gaining popularity and have been widely adopted in real-
world applications. While the quality of training data is essential, privacy concerns arise …

Fine-tuning large language models with user-level differential privacy

Z Charles, A Ganesh, R McKenna… - arXiv preprint arXiv …, 2024 - arxiv.org
We investigate practical and scalable algorithms for training large language models (LLMs)
with user-level differential privacy (DP) in order to provably safeguard all the examples …
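
The mechanism the abstract names, user-level DP, differs from example-level DP in where the clipping and noise are applied: each user's entire contribution is bounded at once. A minimal sketch in the DP-FedAvg style, with generic parameter names (`clip_norm`, `noise_multiplier`) that are assumptions rather than the paper's notation:

```python
import numpy as np

def dp_user_aggregate(user_updates, clip_norm=1.0, noise_multiplier=1.0, seed=0):
    """Average per-user model updates under user-level DP: clip each user's
    whole update to an L2 bound, then add Gaussian noise scaled to that
    bound, so the guarantee covers all of a user's examples at once."""
    rng = np.random.default_rng(seed)
    clipped = [u * min(1.0, clip_norm / max(np.linalg.norm(u), 1e-12))
               for u in user_updates]
    total = np.sum(clipped, axis=0)
    # Per-user sensitivity is clip_norm, so noise is calibrated to it.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(user_updates)
```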

User inference attacks on large language models

N Kandpal, K Pillutla, A Oprea, P Kairouz… - arXiv preprint arXiv …, 2023 - arxiv.org
Fine-tuning is a common and effective method for tailoring large language models (LLMs) to
specialized tasks and applications. In this paper, we study the privacy implications of fine …
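
One common shape for such an attack is a likelihood-ratio test over a user's held-out text: if the fine-tuned model assigns it markedly higher likelihood than a reference model does, the user plausibly contributed fine-tuning data. The sketch below shows that shape only; the `log_prob` callables and the threshold are placeholders, not the paper's exact statistic.

```python
def user_inference_score(samples, log_prob_finetuned, log_prob_reference):
    """Mean log-likelihood ratio over fresh samples from one user."""
    ratios = [log_prob_finetuned(x) - log_prob_reference(x) for x in samples]
    return sum(ratios) / len(ratios)

def user_in_training(samples, log_prob_finetuned, log_prob_reference,
                     threshold=0.0):
    # A positive mean ratio means the fine-tuned model fits this user's
    # distribution better than the reference model does -- evidence that
    # the user's data was in the fine-tuning set.
    return user_inference_score(samples, log_prob_finetuned,
                                log_prob_reference) > threshold
```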

Towards Federated Large Language Models: Motivations, Methods, and Future Directions

Y Cheng, W Zhang, Z Zhang, C Zhang… - IEEE Communications Surveys & Tutorials, 2024 - ieeexplore.ieee.org
Large Language Models (LLMs), such as LLaMA and GPT-4, have transformed the
paradigm of natural language comprehension and generation. Despite their impressive …

PrE-Text: Training language models on private federated data in the age of LLMs

C Hou, A Shrivastava, H Zhan, R Conway, T Le… - arXiv preprint arXiv …, 2024 - arxiv.org
On-device training is currently the most common approach for training machine learning
(ML) models on private, distributed user data. Despite this, on-device training has several …

Profit: Benchmarking personalization and robustness trade-off in federated prompt tuning

L Collins, S Wu, S Oh, KC Sim - arXiv preprint arXiv:2310.04627, 2023 - arxiv.org
In many applications of federated learning (FL), clients desire models that are personalized
using their local data, yet are also robust in the sense that they retain general global …
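
The trade-off the abstract describes can be made concrete with a single interpolation knob between the server's robust global soft prompt and a locally fine-tuned one. A hedged sketch in PyTorch, where the knob `lam` and the training loop are illustrative mechanisms, not necessarily the benchmark's protocol:

```python
import torch

def personalize_prompt(global_prompt: torch.Tensor, local_loss_fn,
                       steps: int = 50, lr: float = 1e-2, lam: float = 0.5):
    """Interpolate between the global prompt and a locally adapted one:
    lam=1 keeps the robust global prompt, lam=0 is fully personalized."""
    prompt = global_prompt.clone().requires_grad_(True)
    opt = torch.optim.Adam([prompt], lr=lr)
    for _ in range(steps):          # local adaptation on client data
        opt.zero_grad()
        local_loss_fn(prompt).backward()
        opt.step()
    return lam * global_prompt + (1 - lam) * prompt.detach()
```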

Worldwide federated training of language models

A Iacob, L Sani, B Marino, P Aleksandrov… - arXiv preprint arXiv …, 2024 - arxiv.org
The reliance of language model training on massive amounts of computation and vast
datasets scraped from potentially low-quality, copyrighted, or sensitive data has come into …

pfl-research: simulation framework for accelerating research in Private Federated Learning

F Granqvist, C Song, Á Cahill… - Advances in Neural Information Processing Systems, 2025 - proceedings.neurips.cc
Federated learning (FL) is an emerging machine learning (ML) training paradigm where
clients own their data and collaborate to train a global model, without revealing any data to …
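
The training paradigm such a simulator accelerates is typically FedAvg-style: sample a cohort of clients, run local SGD on each client's private shard, and average the results on the server. A toy sketch on a least-squares objective; all function and parameter names here are generic assumptions, not the framework's API.

```python
import numpy as np

def local_sgd(weights, data, lr=0.1, epochs=1):
    """One client's local training pass (toy least-squares objective)."""
    w = weights.copy()
    for _ in range(epochs):
        for x, y in data:  # x: feature vector, y: scalar target
            grad = 2 * x * (np.dot(w, x) - y)
            w -= lr * grad
    return w

def fedavg(global_w, client_shards, rounds=10, clients_per_round=2, seed=0):
    """Simulate federated rounds: sample clients, train locally, average."""
    rng = np.random.default_rng(seed)
    for _ in range(rounds):
        picked = rng.choice(len(client_shards), clients_per_round,
                            replace=False)
        updates = [local_sgd(global_w, client_shards[i]) for i in picked]
        global_w = np.mean(updates, axis=0)  # server aggregation step
    return global_w
```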