Large language models can be strong differentially private learners

X Li, F Tramer, P Liang, T Hashimoto - arXiv preprint arXiv:2110.05679, 2021 - arxiv.org

Understanding gradient clipping in private SGD: A geometric perspective

X Chen, SZ Wu, M Hong - Advances in Neural Information Processing Systems, 2020 - proceedings.neurips.cc
Deep learning models are increasingly popular in many machine learning applications
where the training data may contain sensitive information. To provide formal and rigorous …

Fast-adapting and privacy-preserving federated recommender system

Q Wang, H Yin, T Chen, J Yu, A Zhou, X Zhang - The VLDB Journal, 2022 - Springer
In the mobile Internet era, recommender systems have become an irreplaceable tool to help
users discover useful items, thus alleviating the information overload problem. Recent …

Deep learning with Gaussian differential privacy

Z Bu, J Dong, Q Long, WJ Su - Harvard Data Science Review, 2020 - pmc.ncbi.nlm.nih.gov
Deep learning models are often trained on datasets that contain sensitive information such
as individuals' shopping transactions, personal contacts, and medical records. An …

On privacy and personalization in cross-silo federated learning

K Liu, S Hu, SZ Wu, V Smith - Advances in Neural Information Processing Systems, 2022 - proceedings.neurips.cc
While the application of differential privacy (DP) has been well-studied in cross-device
federated learning (FL), there is a lack of work considering DP and its implications for cross …

Automatic clipping: Differentially private deep learning made easier and stronger

Z Bu, YX Wang, S Zha, G Karypis - Advances in Neural Information Processing Systems, 2023 - proceedings.neurips.cc
Per-example gradient clipping is a key algorithmic step that enables practical differentially
private (DP) training for deep learning models. The choice of clipping threshold $R$ …