How to DP-fy ML: A practical guide to machine learning with differential privacy

N Ponomareva, H Hazimeh, A Kurakin, Z Xu… - Journal of Artificial …, 2023 - jair.org
Machine Learning (ML) models are ubiquitous in real-world applications and are a
constant focus of research. Modern ML models have become more complex, deeper, and …

Differentially private natural language models: Recent advances and future directions

L Hu, I Habernal, L Shen, D Wang - arXiv preprint arXiv:2301.09112, 2023 - arxiv.org
Recent developments in deep learning have led to great success in various natural
language processing (NLP) tasks. However, these applications may involve data that …

Federated learning of Gboard language models with differential privacy

Z Xu, Y Zhang, G Andrew, CA Choquette-Choo… - arXiv preprint arXiv …, 2023 - arxiv.org
We train language models (LMs) with federated learning (FL) and differential privacy (DP) in
the Google Keyboard (Gboard). We apply the DP-Follow-the-Regularized-Leader (DP …

Privacy side channels in machine learning systems

E Debenedetti, G Severi, N Carlini… - 33rd USENIX Security …, 2024 - usenix.org
Most current approaches for protecting privacy in machine learning (ML) assume that
models exist in a vacuum. Yet, in reality, these models are part of larger systems that include …

(Amplified) Banded Matrix Factorization: A unified approach to private training

CA Choquette-Choo, A Ganesh… - Advances in …, 2023 - proceedings.neurips.cc
Matrix factorization (MF) mechanisms for differential privacy (DP) have substantially
improved the state-of-the-art in privacy-utility-computation tradeoffs for ML applications in a …
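The matrix-factorization idea in this snippet can be illustrated with a toy sketch (our own illustration, not the paper's banded construction): to release all prefix sums A x of a stream x under DP, factor A = B C, add Gaussian noise to C x, and post-process with B, so the noise across releases is correlated through B. The function name `prefix_sums_mf` and the trivial factorization B = A, C = I below are assumptions for illustration; choosing better factorizations is exactly what this line of work optimizes.

```python
import numpy as np

def prefix_sums_mf(x, B, C, sigma, rng):
    """Release all prefix sums of x via the factorization A = B @ C (sketch).

    Gaussian noise is added to C @ x and then mapped through B, so the
    noise covariance in the output is sigma^2 * B @ B.T rather than isotropic.
    """
    z = rng.normal(0.0, sigma, size=C.shape[0])
    return B @ (C @ x + z)

n = 8
A = np.tril(np.ones((n, n)))   # prefix-sum (continual counting) matrix
B, C = A, np.eye(n)            # trivial factorization A = A @ I for illustration

rng = np.random.default_rng(0)
x = np.ones(n)                 # a stream of unit increments
est = prefix_sums_mf(x, B, C, sigma=0.5, rng=rng)
print(np.round(est, 2))        # noisy estimates of [1, 2, ..., 8]
```

With sigma set to 0 the mechanism returns the exact prefix sums, which makes the decomposition easy to sanity-check before tuning the noise scale.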

On the convergence of federated averaging with cyclic client participation

YJ Cho, P Sharma, G Joshi, Z Xu… - International …, 2023 - proceedings.mlr.press
Federated Averaging (FedAvg) and its variants are the most popular optimization
algorithms in federated learning (FL). Previous convergence analyses of FedAvg either …

Constant matters: Fine-grained error bound on differentially private continual observation

H Fichtenberger, M Henzinger… - … on Machine Learning, 2023 - proceedings.mlr.press
We study fine-grained error bounds for differentially private algorithms for counting under
continual observation. Our main insight is that the matrix mechanism when using lower …

Fine-tuning large language models with user-level differential privacy

Z Charles, A Ganesh, R McKenna… - arXiv preprint arXiv …, 2024 - arxiv.org
We investigate practical and scalable algorithms for training large language models (LLMs)
with user-level differential privacy (DP) in order to provably safeguard all the examples …
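User-level DP as described in this snippet can be sketched, in broad strokes, by bounding each user's entire contribution before aggregation: average or sum a user's gradients into one vector, clip that vector, then add noise calibrated to the per-user clip norm. This is a generic sketch of user-level sensitivity control, not the paper's algorithm; `user_level_dp_mean` and its parameters are hypothetical names.

```python
import numpy as np

def user_level_dp_mean(per_user_grads, clip_norm, noise_mult, rng):
    """One noisy aggregation step with user-level sensitivity (sketch).

    Each user contributes a single clipped vector, so replacing one user's
    entire data changes the sum by at most `clip_norm` in L2 norm.
    """
    total = np.zeros_like(per_user_grads[0])
    for g in per_user_grads:
        norm = np.linalg.norm(g)
        total += g * min(1.0, clip_norm / max(norm, 1e-12))
    total += rng.normal(0.0, noise_mult * clip_norm, size=total.shape)
    return total / len(per_user_grads)

rng = np.random.default_rng(0)
users = [rng.normal(0, 1, size=4) for _ in range(100)]  # one vector per user
result = user_level_dp_mean(users, clip_norm=1.0, noise_mult=1.0, rng=rng)
print(np.round(result, 3))
```

The key contrast with example-level DP is the unit of protection: clipping happens once per user, not once per training example, so the guarantee covers all of a user's examples at once.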

One-shot empirical privacy estimation for federated learning

G Andrew, P Kairouz, S Oh, A Oprea… - arXiv preprint arXiv …, 2023 - arxiv.org
Privacy estimation techniques for differentially private (DP) algorithms are useful for
comparing against analytical bounds, or for empirically measuring privacy loss in settings …

Efficient and near-optimal noise generation for streaming differential privacy

KD Dvijotham, HB McMahan, K Pillutla… - 2024 IEEE 65th …, 2024 - ieeexplore.ieee.org
In the task of differentially private (DP) continual counting, we receive a stream of increments
and our goal is to output an approximate running total of these increments, without revealing …
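A classic baseline for the continual counting task this snippet describes is the binary-tree mechanism; the sketch below (our own simplification, assuming Gaussian noise and a power-of-two horizon, not the paper's near-optimal construction) shows how each running total is assembled from O(log T) noisy dyadic partial sums, so per-release error grows only polylogarithmically in T.

```python
import math
import numpy as np

def tree_counts(stream, sigma, rng):
    """Binary-tree mechanism for DP continual counting (sketch).

    Precompute a noisy sum for every dyadic interval, then answer each
    prefix [0, t) by combining the intervals in t's binary expansion.
    Assumes len(stream) is a power of two.
    """
    T = len(stream)
    noisy = {}  # noisy[(i, j)] = noisy sum of stream[j*2^i : (j+1)*2^i]
    for i in range(int(math.log2(T)) + 1):
        width = 2 ** i
        for j in range(T // width):
            seg = sum(stream[j * width:(j + 1) * width])
            noisy[(i, j)] = seg + rng.normal(0.0, sigma)
    out = []
    for t in range(1, T + 1):
        # greedily decompose [0, t) into at most log2(T)+1 dyadic intervals
        total, pos, rem = 0.0, 0, t
        i = int(math.log2(T))
        while rem > 0:
            width = 2 ** i
            if width <= rem:
                total += noisy[(i, pos // width)]
                pos += width
                rem -= width
            i -= 1
        out.append(total)
    return out

rng = np.random.default_rng(0)
stream = [1] * 16  # sixteen unit increments
print([round(c, 2) for c in tree_counts(stream, sigma=0.1, rng=rng)])
```

Each element of the stream appears in log2(T)+1 tree nodes, which is what drives the privacy accounting; the matrix-factorization view in the entries above generalizes exactly this node structure.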