Differentially private fine-tuning of language models

D Yu, S Naik, A Backurs, S Gopi, HA Inan… - arXiv …

Automatic clipping: Differentially private deep learning made easier and stronger

Z Bu, YX Wang, S Zha… - Advances in Neural …, 2023 - proceedings.neurips.cc
Per-example gradient clipping is a key algorithmic step that enables practical differentially
private (DP) training for deep learning models. The choice of clipping threshold $R$ …
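The snippet above refers to per-example gradient clipping with a threshold R, the core aggregation step of DP-SGD. A minimal NumPy sketch of one such step is given below; the function name, parameters, and noise scaling are illustrative assumptions, not code from the cited paper:

```python
import numpy as np

def dp_sgd_step(per_example_grads, R=1.0, noise_multiplier=1.0, rng=None):
    """One DP-SGD aggregation step (illustrative sketch): clip each
    example's gradient to L2 norm at most R, sum the clipped gradients,
    add Gaussian noise calibrated to R, and average over the batch."""
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale the gradient down only if its norm exceeds R;
        # gradients already within the threshold pass through unchanged.
        clipped.append(g * min(1.0, R / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * R, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

Because each example's influence on the sum is bounded by R, adding Gaussian noise with standard deviation proportional to R yields a differentially private gradient estimate; the choice of R trades off clipping bias against noise magnitude, which is the tension the cited paper addresses.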

Privacy-preserving in-context learning for large language models

T Wu, A Panda, JT Wang, P Mittal - arXiv preprint arXiv:2305.01639, 2023 - arxiv.org
In-context learning (ICL) is an important capability of Large Language Models (LLMs),
enabling these models to dynamically adapt based on specific, in-context exemplars …