Large language models can be strong differentially private learners

X Li, F Tramer, P Liang, T Hashimoto - arXiv preprint arXiv:2110.05679, 2021 - arxiv.org

Automatic clipping: Differentially private deep learning made easier and stronger

Z Bu, YX Wang, S Zha… - Advances in Neural …, 2024 - proceedings.neurips.cc
Per-example gradient clipping is a key algorithmic step that enables practical differentially
private (DP) training for deep learning models. The choice of clipping threshold $R$ …
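The snippet above refers to per-example gradient clipping with threshold R, the core step of DP-SGD: each example's gradient is clipped to L2 norm at most R before averaging, and Gaussian noise scaled to R is added. A minimal pure-Python sketch under that description (function names `clip_and_average` and `dp_sgd_step` are illustrative, not from the paper):

```python
import math
import random

def clip_and_average(per_example_grads, R):
    """Clip each per-example gradient to L2 norm at most R, then average.

    per_example_grads: list of gradient vectors (lists of floats), one per example.
    R: the clipping threshold discussed in the snippet above.
    """
    clipped = []
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        # Scale down any gradient whose norm exceeds R; leave the rest unchanged.
        scale = min(1.0, R / norm) if norm > 0 else 1.0
        clipped.append([x * scale for x in g])
    n = len(clipped)
    return [sum(col) / n for col in zip(*clipped)]

def dp_sgd_step(per_example_grads, R, noise_multiplier, rng):
    """One noisy gradient estimate: clipped mean plus Gaussian noise.

    The noise standard deviation is noise_multiplier * R / n, so the
    sensitivity of the averaged, clipped sum is bounded by R / n.
    """
    mean = clip_and_average(per_example_grads, R)
    n = len(per_example_grads)
    return [m + rng.gauss(0.0, noise_multiplier * R / n) for m in mean]

# Example: a gradient of norm 5 is scaled to norm R=1; one of norm 0.5 is kept.
avg = clip_and_average([[3.0, 4.0], [0.3, 0.4]], R=1.0)
# avg == [0.45, 0.6]; its norm (0.75) can never exceed R.
```

Because every per-example contribution is bounded by R, the Gaussian mechanism's privacy analysis applies to the noisy mean regardless of the raw gradient magnitudes; choosing R too small biases the update, which is the trade-off the entry studies.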

Privacy-preserving prompt tuning for large language model services

Y Li, Z Tan, Y Liu - arXiv preprint arXiv:2305.06212, 2023 - arxiv.org
Prompt tuning provides an efficient way for users to customize Large Language Models
(LLMs) with their private data in the emerging LLM service scenario. However, the sensitive …

Federated large language model: A position paper

C Chen, X Feng, J Zhou, J Yin, X Zheng - arXiv e-prints, 2023 - ui.adsabs.harvard.edu
Large-scale language models (LLMs) have received significant attention and found diverse
applications across various domains, but their development encounters challenges in real …

Preserving privacy in large language models: A survey on current threats and solutions

M Miranda, ES Ruzzetti, A Santilli, FM Zanzotto… - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs) represent a significant advancement in artificial
intelligence, finding applications across various domains. However, their reliance on …