How to DP-fy ML: A practical guide to machine learning with differential privacy
Machine Learning (ML) models are ubiquitous in real-world applications and are a
constant focus of research. Modern ML models have become more complex, deeper, and …
A field guide to federated optimization
Federated learning and analytics are a distributed approach for collaboratively learning
models (or statistics) from decentralized data, motivated by and designed for privacy …
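For orientation, here is a minimal sketch of one FedAvg-style round, the baseline most federated optimizers build on. The least-squares local_train and all hyperparameters are illustrative stand-ins, not from the paper.

import numpy as np

def local_train(w, X, y, lr=0.1, epochs=5):
    # Stand-in for the client's real training: a few epochs of
    # full-batch gradient descent on least-squares loss.
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_round(w_global, clients):
    # One round: broadcast the global model, train locally on each
    # client, then average the results weighted by dataset size.
    updates = [local_train(w_global.copy(), X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    p = sizes / sizes.sum()
    return sum(pi * u for pi, u in zip(p, updates))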
Federated learning with buffered asynchronous aggregation
Scalability and privacy are two critical concerns for cross-device federated learning (FL)
systems. In this work, we identify that synchronous FL cannot scale efficiently beyond a few …
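The buffered-asynchronous idea from the title can be sketched in a few lines: instead of waiting on a fixed synchronous cohort, the server applies an update as soon as K client deltas have arrived. The class below is a toy under stated assumptions (no staleness weighting, no secure aggregation), not the paper's algorithm.

import numpy as np

class BufferedAggregator:
    # Toy buffered asynchronous aggregation: client deltas arrive at
    # any time; the server folds them in once the buffer holds K.
    def __init__(self, w, buffer_size=10, server_lr=1.0):
        self.w = w
        self.buffer = []
        self.K = buffer_size
        self.lr = server_lr

    def receive(self, delta):
        self.buffer.append(delta)
        if len(self.buffer) >= self.K:
            self.w = self.w + self.lr * np.mean(self.buffer, axis=0)
            self.buffer.clear()
        return self.w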
Secure single-server aggregation with (poly) logarithmic overhead
Secure aggregation is a cryptographic primitive that enables a server to learn the sum of the
vector inputs of many clients. Bonawitz et al. (CCS 2017) presented a construction that incurs …
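The primitive is easiest to see with pairwise masks: every pair of clients shares a random vector that one adds and the other subtracts, so the masks cancel in the sum and the server learns only the total. This toy omits everything that makes the real protocol hard (per-pair key agreement, dropouts, malicious parties).

import numpy as np

def masked_inputs(xs, modulus=2**16, seed=0):
    # Toy pairwise masking: for each pair (i, j), draw a random mask r,
    # add it to client i's vector and subtract it from client j's.
    rng = np.random.default_rng(seed)
    n, d = len(xs), len(xs[0])
    masked = [x.astype(np.int64) % modulus for x in xs]
    for i in range(n):
        for j in range(i + 1, n):
            r = rng.integers(0, modulus, size=d)
            masked[i] = (masked[i] + r) % modulus
            masked[j] = (masked[j] - r) % modulus
    return masked

# The server sees only masked vectors, yet their sum (mod p) is exact.
xs = [np.array([1, 2, 3]), np.array([4, 5, 6]), np.array([7, 8, 9])]
assert np.array_equal(sum(masked_inputs(xs)) % 2**16, sum(xs) % 2**16)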
Shuffled model of differential privacy in federated learning
We consider a distributed empirical risk minimization (ERM) optimization problem with
communication efficiency and privacy requirements, motivated by the federated learning …
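In the shuffled model, each client first applies a local randomizer, and an intermediary shuffler then permutes the reports before the server aggregates, breaking the link between clients and messages. A minimal sketch, with illustrative clipping and noise parameters rather than the paper's mechanism:

import numpy as np

def local_randomizer(grad, clip=1.0, sigma=1.0, rng=None):
    # Client side: clip the update to L2 norm `clip`, then add
    # Gaussian noise locally before anything leaves the device.
    rng = rng or np.random.default_rng()
    g = grad * min(1.0, clip / max(np.linalg.norm(grad), 1e-12))
    return g + rng.normal(0.0, sigma * clip, size=g.shape)

def shuffle_and_aggregate(reports, rng=None):
    # Shuffler + server: a uniform permutation hides which client sent
    # which report; the server then averages as usual.
    rng = rng or np.random.default_rng()
    perm = rng.permutation(len(reports))
    return np.mean([reports[i] for i in perm], axis=0)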
Practical and private (deep) learning without sampling or shuffling
We consider training models with differential privacy (DP) using mini-batch gradients. The
existing state-of-the-art, Differentially Private Stochastic Gradient Descent (DP-SGD) …
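For reference, one step of the DP-SGD baseline this paper improves on: clip each example's gradient, sum, and add Gaussian noise calibrated to the clip norm. (The paper's own method, DP-FTRL, replaces the sampling/shuffling assumption with tree-based correlated noise; this sketch shows only the baseline.)

import numpy as np

def dp_sgd_step(w, per_example_grads, clip=1.0, noise_mult=1.0, lr=0.1,
                rng=None):
    # Clip each per-example gradient to L2 norm `clip`, sum, add
    # Gaussian noise with std noise_mult * clip, then average and step.
    rng = rng or np.random.default_rng()
    clipped = [g * min(1.0, clip / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        0.0, noise_mult * clip, size=w.shape)
    return w - lr * noisy_sum / len(per_example_grads)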
Hiding among the clones: A simple and nearly optimal analysis of privacy amplification by shuffling
Recent work of Erlingsson, Feldman, Mironov, Raghunathan, Talwar, and Thakurta
demonstrates that random shuffling amplifies differential privacy guarantees of locally …
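The headline bound, stated informally in LaTeX (constants and parameter regimes omitted; the paper gives the exact, nearly optimal expression): shuffling $n$ reports from $\varepsilon_0$-DP local randomizers satisfies central $(\varepsilon, \delta)$-DP with

\varepsilon = O\!\left( \varepsilon_0 \sqrt{\frac{\log(1/\delta)}{n}} \right)
\qquad \text{for } \varepsilon_0 \le 1 .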
On large-cohort training for federated learning
Federated learning methods typically learn a model by iteratively sampling updates from a
population of clients. In this work, we explore how the number of clients sampled at each …
Breaking the communication-privacy-accuracy trilemma
Two major challenges in distributed learning and estimation are 1) preserving the privacy of
the local samples; and 2) communicating them efficiently to a central server, while achieving …
Optimal algorithms for mean estimation under local differential privacy
We study the problem of mean estimation of $\ell_2$-bounded vectors under the constraint
of local differential privacy. While the literature has a variety of algorithms that achieve the …
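A natural baseline for this problem is additive Gaussian noise: each client perturbs its bounded vector locally and the server averages. The sketch below is that baseline under approximate LDP, not the optimal mechanism the paper characterizes.

import numpy as np

def ldp_report(x, sigma, rng=None):
    # Client side: x is assumed to satisfy ||x||_2 <= 1; add isotropic
    # Gaussian noise before sending (sigma set by the privacy budget).
    rng = rng or np.random.default_rng()
    return x + rng.normal(0.0, sigma, size=x.shape)

def estimate_mean(reports):
    # Server side: the average of noisy reports is unbiased for the
    # true mean, with variance sigma^2 * d / n.
    return np.mean(reports, axis=0)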