Extracting training data from large language models

N Carlini, F Tramer, E Wallace, M Jagielski… - 30th USENIX Security …, 2021 - usenix.org
It has become common to publish large (billion parameter) language models that have been
trained on private datasets. This paper demonstrates that in such settings, an adversary can …

Auditing differentially private machine learning: How private is private SGD?

M Jagielski, J Ullman, A Oprea - Advances in Neural …, 2020 - proceedings.neurips.cc
We investigate whether Differentially Private SGD offers better privacy in practice
than what is guaranteed by its state-of-the-art analysis. We do so via novel data poisoning …

DP-CGAN: Differentially private synthetic data and label generation

R Torkzadehmahani, P Kairouz… - Proceedings of the …, 2019 - openaccess.thecvf.com
Generative Adversarial Networks (GANs) are one of the well-known models to
generate synthetic data including images, especially for research communities that cannot …

Privacy at scale: Local differential privacy in practice

G Cormode, S Jha, T Kulkarni, N Li… - Proceedings of the …, 2018 - dl.acm.org
Local differential privacy (LDP), where users randomly perturb their inputs to provide
plausible deniability of their data without the need for a trusted party, has been adopted …
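The snippet above describes the core LDP mechanism: each user randomizes their own input before it leaves their device, so no trusted aggregator is needed. A minimal sketch of binary randomized response, the simplest such perturbation, together with the server-side debiasing step (function names and parameters are illustrative, not taken from the paper):

```python
import math
import random

def randomized_response(bit: bool, epsilon: float) -> bool:
    """Report the true bit with prob. e^eps / (e^eps + 1); otherwise flip it.
    Either report is plausibly deniable: the likelihood ratio between the
    two possible true inputs is at most e^eps (epsilon-LDP)."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p else (not bit)

def estimate_true_rate(reports, epsilon):
    """Server side: debias the observed fraction of True reports into an
    unbiased estimate of the true fraction."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    # observed = p * true + (1 - p) * (1 - true); solve for true
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)
```

With a moderate epsilon and enough users, the debiased estimate concentrates around the true population rate even though no individual report is trustworthy on its own.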

Practical locally private heavy hitters

R Bassily, K Nissim, U Stemmer… - Advances in Neural …, 2017 - proceedings.neurips.cc
We present new practical locally differentially private heavy-hitters algorithms achieving
optimal or near-optimal worst-case error: TreeHist and Bitstogram. In both algorithms, server …

Locally differentially private frequent itemset mining

T Wang, N Li, S Jha - 2018 IEEE Symposium on Security and …, 2018 - ieeexplore.ieee.org
The notion of Local Differential Privacy (LDP) enables users to respond to sensitive
questions while preserving their privacy. The basic LDP frequent oracle (FO) protocol …
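The frequency oracle (FO) mentioned in the snippet is the building block that lets the server estimate how often each item occurs from locally perturbed reports. A minimal sketch using generalized randomized response, one standard FO construction (an illustrative example, not the specific protocol from the paper):

```python
import math
import random

def grr_perturb(value, domain, epsilon):
    """Generalized randomized response over a finite domain of size k:
    report the true value with prob. e^eps / (e^eps + k - 1), otherwise
    a uniformly random other value."""
    k = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p:
        return value
    return random.choice([v for v in domain if v != value])

def grr_frequencies(reports, domain, epsilon):
    """Server side: debias raw report counts into frequency estimates."""
    k, n = len(domain), len(reports)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    q = (1.0 - p) / (k - 1)  # prob. of reporting any specific other value
    counts = {v: 0 for v in domain}
    for r in reports:
        counts[r] += 1
    return {v: (counts[v] / n - q) / (p - q) for v in domain}
```

Estimation error grows with the domain size k, which is why itemset-mining and heavy-hitter papers build more elaborate protocols on top of a basic FO rather than applying it directly to huge domains.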

Locally differentially private analysis of graph statistics

J Imola, T Murakami, K Chaudhuri - 30th USENIX Security Symposium …, 2021 - usenix.org
Differentially private analysis of graphs is widely used for releasing statistics from sensitive
graphs while still preserving user privacy. Most existing algorithms however are in a …

Heavy hitters and the structure of local privacy

M Bun, J Nelson, U Stemmer - ACM Transactions on Algorithms (TALG), 2019 - dl.acm.org
We present a new locally differentially private algorithm for the heavy hitters problem that
achieves optimal worst-case error as a function of all standardly considered parameters …

Locally private graph neural networks

S Sajadmanesh, D Gatica-Perez - … of the 2021 ACM SIGSAC conference …, 2021 - dl.acm.org
Graph Neural Networks (GNNs) have demonstrated superior performance in learning node
representations for various graph inference tasks. However, learning over graph data can …

Privacy- and utility-preserving textual analysis via calibrated multivariate perturbations

O Feyisetan, B Balle, T Drake, T Diethe - … on web search and data mining, 2020 - dl.acm.org
Accurately learning from user data while providing quantifiable privacy guarantees offers
an opportunity to build better ML models while maintaining user trust. This paper presents a …