How to DP-fy ML: A practical guide to machine learning with differential privacy

N Ponomareva, H Hazimeh, A Kurakin, Z Xu… - Journal of Artificial …, 2023 - jair.org
Abstract: Machine Learning (ML) models are ubiquitous in real-world applications and are a
constant focus of research. Modern ML models have become more complex, deeper, and …

A field guide to federated optimization

J Wang, Z Charles, Z Xu, G Joshi, HB McMahan… - arXiv preprint arXiv …, 2021 - arxiv.org
Federated learning and analytics are distributed approaches for collaboratively learning
models (or statistics) from decentralized data, motivated by and designed for privacy …

Federated learning with buffered asynchronous aggregation

J Nguyen, K Malik, H Zhan… - International …, 2022 - proceedings.mlr.press
Scalability and privacy are two critical concerns for cross-device federated learning (FL)
systems. In this work, we identify that synchronous FL cannot scale efficiently beyond a few …

Secure single-server aggregation with (poly) logarithmic overhead

JH Bell, KA Bonawitz, A Gascón, T Lepoint… - Proceedings of the …, 2020 - dl.acm.org
Secure aggregation is a cryptographic primitive that enables a server to learn the sum of the
vector inputs of many clients. Bonawitz et al. (CCS 2017) presented a construction that incurs …
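The primitive described in this snippet can be illustrated with a toy pairwise-masking scheme: each pair of clients shares masks that cancel in the server's sum, so no single upload reveals its vector. This is a drastic simplification of such protocols — in a real construction the masks come from key agreement and dropout handling, whereas here they are simply sampled, and all names are illustrative:

```python
import random

def pairwise_masks(client_ids, dim, modulus=2**16):
    """Derive cancelling pairwise masks: for each pair a < b, add a random
    mask to a's vector and subtract it from b's, so all masks sum to zero
    (mod modulus). Illustrative only; real protocols derive these from
    key agreement rather than sampling them centrally."""
    masks = {i: [0] * dim for i in client_ids}
    for a in client_ids:
        for b in client_ids:
            if a < b:
                m = [random.randrange(modulus) for _ in range(dim)]
                for k in range(dim):
                    masks[a][k] = (masks[a][k] + m[k]) % modulus
                    masks[b][k] = (masks[b][k] - m[k]) % modulus
    return masks

def masked_input(x, mask, modulus=2**16):
    """What a client actually uploads: its vector plus its mask."""
    return [(xi + mi) % modulus for xi, mi in zip(x, mask)]

# The server sums the masked uploads; the masks cancel, revealing only the sum.
clients = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
masks = pairwise_masks(sorted(clients), dim=2)
uploads = [masked_input(clients[i], masks[i]) for i in sorted(clients)]
total = [sum(col) % 2**16 for col in zip(*uploads)]
print(total)  # [9, 12] — the true coordinate-wise sum
```

The overhead questions the paper addresses arise precisely because naive pairwise masking scales quadratically in the number of clients.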

Shuffled model of differential privacy in federated learning

A Girgis, D Data, S Diggavi… - International …, 2021 - proceedings.mlr.press
We consider a distributed empirical risk minimization (ERM) optimization problem with
communication efficiency and privacy requirements, motivated by the federated learning …

Practical and private (deep) learning without sampling or shuffling

P Kairouz, B McMahan, S Song… - International …, 2021 - proceedings.mlr.press
We consider training models with differential privacy (DP) using mini-batch gradients. The
existing state-of-the-art, Differentially Private Stochastic Gradient Descent (DP-SGD) …
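For context, the DP-SGD baseline this snippet refers to can be sketched as a clip-then-noise aggregation step. This is an illustrative rendering of the standard recipe, not the method proposed in the paper itself, and the function and parameter names are our own:

```python
import math
import random

def dpsgd_average(per_example_grads, clip_norm, noise_multiplier):
    """One DP-SGD aggregation step: clip each per-example gradient to
    L2 norm `clip_norm`, sum, add Gaussian noise with standard deviation
    noise_multiplier * clip_norm, and average over the mini-batch."""
    n = len(per_example_grads)
    dim = len(per_example_grads[0])
    summed = [0.0] * dim
    for g in per_example_grads:
        norm = math.sqrt(sum(v * v for v in g))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        for k in range(dim):
            summed[k] += g[k] * scale
    sigma = noise_multiplier * clip_norm
    return [(v + random.gauss(0.0, sigma)) / n for v in summed]

# With the noise turned off, only clipping acts: a gradient of norm 5
# clipped to norm 1 is scaled by 1/5, giving roughly [0.6, 0.8].
print(dpsgd_average([[3.0, 4.0]], clip_norm=1.0, noise_multiplier=0.0))
```

The privacy analysis of this recipe leans on subsampling or shuffling amplification, which is exactly the assumption the paper's title calls into question.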

Hiding among the clones: A simple and nearly optimal analysis of privacy amplification by shuffling

V Feldman, A McMillan, K Talwar - 2021 IEEE 62nd Annual …, 2022 - ieeexplore.ieee.org
Recent work of Erlingsson, Feldman, Mironov, Raghunathan, Talwar, and Thakurta
demonstrates that random shuffling amplifies differential privacy guarantees of locally …

On large-cohort training for federated learning

Z Charles, Z Garrett, Z Huo… - Advances in neural …, 2021 - proceedings.neurips.cc
Federated learning methods typically learn a model by iteratively sampling updates from a
population of clients. In this work, we explore how the number of clients sampled at each …

Breaking the communication-privacy-accuracy trilemma

WN Chen, P Kairouz, A Ozgur - Advances in Neural …, 2020 - proceedings.neurips.cc
Two major challenges in distributed learning and estimation are 1) preserving the privacy of
the local samples; and 2) communicating them efficiently to a central server, while achieving …

Optimal algorithms for mean estimation under local differential privacy

H Asi, V Feldman, K Talwar - International Conference on …, 2022 - proceedings.mlr.press
We study the problem of mean estimation of $\ell_2$-bounded vectors under the constraint
of local differential privacy. While the literature has a variety of algorithms that achieve the …
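The simplest mechanism in this problem family — a baseline, not the paper's optimal algorithm — has each user add Laplace noise calibrated to the data range before reporting, so the server sees only noised values yet their average remains unbiased. A minimal scalar sketch, with illustrative names and parameters:

```python
import math
import random

def ldp_report(x, eps, bound=1.0):
    """Release a scalar x in [-bound, bound] under eps-local-DP by adding
    Laplace noise of scale 2*bound/eps (the sensitivity of x is 2*bound).
    Laplace sample drawn via the inverse-CDF method."""
    u = random.random() - 0.5
    noise = -(2 * bound / eps) * math.copysign(math.log(1 - 2 * abs(u)), u)
    return x + noise

def private_mean(xs, eps, bound=1.0):
    """Unbiased mean estimate: average of independently noised reports."""
    return sum(ldp_report(x, eps, bound) for x in xs) / len(xs)

random.seed(0)
est = private_mean([0.5] * 20000, eps=1.0)
print(round(est, 2))  # concentrates near the true mean 0.5 as n grows
```

The per-coordinate error of this baseline degrades badly in high dimensions, which is the regime where the optimal vector algorithms studied in the paper matter.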