Large language models can be strong differentially private learners

X Li, F Tramer, P Liang, T Hashimoto - arXiv preprint arXiv …, 2021 - arxiv.org
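The standard recipe behind such private learners is DP-SGD: clip each example's gradient to a fixed L2 norm, aggregate, add Gaussian noise calibrated to that norm, then take an optimizer step. A minimal PyTorch sketch of the idea, not the paper's own implementation (which uses a more memory-efficient clipping scheme); `clip_norm`, `sigma`, and `lr` are illustrative values:

```python
import torch

def dp_sgd_step(model, loss_fn, batch, clip_norm=1.0, sigma=0.5, lr=0.1):
    """One DP-SGD step: per-example gradient clipping plus Gaussian noise (sketch)."""
    xs, ys = batch
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(xs, ys):
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        # Rescale this example's gradient so its total L2 norm is at most clip_norm.
        total = torch.sqrt(sum(p.grad.norm() ** 2 for p in model.parameters()))
        scale = torch.clamp(clip_norm / (total + 1e-12), max=1.0)
        for acc, p in zip(summed, model.parameters()):
            acc += p.grad * scale
    with torch.no_grad():
        for acc, p in zip(summed, model.parameters()):
            # Noise scale sigma * clip_norm matches the clipped per-example sensitivity.
            p -= lr * (acc + sigma * clip_norm * torch.randn_like(acc)) / len(xs)
```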

Exploring the limits of differentially private deep learning with group-wise clipping

J He, X Li, D Yu, H Zhang, J Kulkarni, YT Lee… - arXiv preprint arXiv …, 2022 - arxiv.org
Differentially private deep learning has recently witnessed advances in computational
efficiency and privacy-utility trade-off. We explore whether further improvements along the …
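The technique named in the title swaps DP-SGD's single global clipping bound for one bound per parameter group (e.g., per layer), which among other things lets each group's gradient be clipped as soon as it is computed. A minimal sketch of group-wise clipping for one example's gradients; the group structure and per-group bounds are illustrative assumptions, not the paper's configuration:

```python
import torch

def groupwise_clip(per_example_grads, group_bounds):
    """Clip each parameter group's per-example gradient to its own bound (sketch).

    per_example_grads: dict mapping group name -> one example's gradient tensor
    group_bounds: dict mapping group name -> clipping norm C_g; the overall
    L2 sensitivity of the concatenated clipped gradient is sqrt(sum_g C_g^2),
    which is what the Gaussian noise must be calibrated to.
    """
    clipped = {}
    for name, g in per_example_grads.items():
        c = group_bounds[name]
        clipped[name] = g * torch.clamp(c / (g.norm() + 1e-12), max=1.0)
    return clipped
```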

Identifying and mitigating privacy risks stemming from language models: A survey

V Smith, AS Shamsabadi, C Ashurst… - arXiv preprint arXiv …, 2023 - arxiv.org
Large Language Models (LLMs) have shown greatly enhanced performance in recent years,
attributed to increased size and extensive training data. This advancement has led to …

Differentially private decoding in large language models

J Majmudar, C Dupuy, C Peris, S Smaili… - arXiv preprint arXiv …, 2022 - arxiv.org
Recent large-scale natural language processing (NLP) systems use a pre-trained Large
Language Model (LLM) on massive and diverse corpora as a headstart. In practice, the pre …
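The snippet ends before the mechanism, but a natural way to privatize decoding itself is to perturb the next-token distribution, for instance by interpolating it with the uniform distribution over the vocabulary, so that no single token's probability depends too sharply on any one training example. A minimal sketch under that reading; `lam` here is an assumed mixing weight, not a calibrated privacy parameter:

```python
import numpy as np

def dp_decode_step(logits, lam=0.9, rng=None):
    """Sample one token from a privatized next-token distribution (sketch).

    logits: 1-D numpy array of next-token logits. Smaller `lam` mixes in
    more of the uniform distribution: more privacy, less fluency.
    """
    rng = rng or np.random.default_rng()
    p = np.exp(logits - logits.max())
    p /= p.sum()
    vocab = len(p)
    p_priv = lam * p + (1.0 - lam) / vocab
    return rng.choice(vocab, p=p_priv)
```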

Adversarial attacks and defenses for large language models (LLMs): methods, frameworks & challenges

P Kumar - International Journal of Multimedia Information …, 2024 - Springer
Large language models (LLMs) have exhibited remarkable efficacy and proficiency in a
wide array of NLP endeavors. Nevertheless, concerns are growing rapidly regarding the …

A customized text sanitization mechanism with differential privacy

H Chen, F Mo, Y Wang, C Chen, JY Nie… - arXiv preprint arXiv …, 2022 - arxiv.org
As privacy issues are receiving increasing attention within the Natural Language Processing
(NLP) community, numerous methods have been proposed to sanitize texts subject to …
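A standard primitive in this family of text-sanitization methods is token replacement via the exponential mechanism: score candidate replacements by similarity to the original token and sample one with probability proportional to exp(eps * u / 2), so similar tokens are favored but every candidate retains nonzero probability. A minimal sketch, assuming similarity scores in [0, 1]; the candidate set and scoring function are illustrative, not the paper's customized mechanism:

```python
import numpy as np

def sanitize_token(token, candidates, similarity, epsilon=2.0, rng=None):
    """Replace `token` using the exponential mechanism (sketch).

    candidates: list of possible replacement tokens
    similarity: dict mapping (token, candidate) -> utility score in [0, 1]
    """
    rng = rng or np.random.default_rng()
    u = np.array([similarity[(token, c)] for c in candidates])
    # Utility sensitivity is at most 1 because scores lie in [0, 1].
    w = np.exp(epsilon * u / 2.0)
    return candidates[rng.choice(len(candidates), p=w / w.sum())]
```

Applied token by token over a document, the composed mechanism's privacy cost grows with the number of replaced tokens, which is why such schemes restrict the candidate sets or the set of tokens they sanitize.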