Privacy risk in machine learning: Analyzing the connection to overfitting

S Yeom, I Giacomelli, M Fredrikson… - 2018 IEEE 31st …, 2018 - ieeexplore.ieee.org
Machine learning algorithms, when applied to sensitive data, pose a distinct threat to
privacy. A growing body of prior work demonstrates that models produced by these …

Stolen memories: Leveraging model memorization for calibrated White-Box membership inference

K Leino, M Fredrikson - 29th USENIX security symposium (USENIX …, 2020 - usenix.org
Membership inference (MI) attacks exploit the fact that machine learning algorithms
sometimes leak information about their training data through the learned model. In this work …
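One simple form of membership inference, in the spirit of loss-based attacks from this line of work (a minimal sketch, not the paper's calibrated white-box method), predicts "member" when the model's loss on an example falls below a threshold, exploiting the lower loss models tend to have on memorized training points:

```python
import math

def nll(prob_true_class):
    # negative log-likelihood of the true class
    return -math.log(max(prob_true_class, 1e-12))

def membership_inference(prob_true_class, threshold):
    """Predict 'member' when the model's loss on the example is below
    a threshold (e.g. calibrated to the average training loss)."""
    return nll(prob_true_class) < threshold

# A model that memorized its training data assigns high confidence
# to training points and lower confidence to unseen points.
train_conf, test_conf = 0.99, 0.60
threshold = 0.3  # hypothetical calibration value
print(membership_inference(train_conf, threshold))  # → True
print(membership_inference(test_conf, threshold))   # → False
```

The gap between training and test loss is what makes this attack work, which is why overfitting is so closely tied to membership-inference risk.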

Differentially private empirical risk minimization revisited: Faster and more general

D Wang, M Ye, J Xu - Advances in Neural Information …, 2017 - proceedings.neurips.cc
In this paper we study differentially private Empirical Risk Minimization (ERM) in different
settings. For smooth (strongly) convex loss function with or without (non)-smooth …
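A common approach in DP-ERM is gradient perturbation: clip each per-example gradient and add calibrated noise at every descent step. The sketch below illustrates the idea on a toy strongly convex objective with Laplace noise and a naive even split of the budget across steps (a simplification; it is not the paper's algorithm, which uses tighter analyses):

```python
import math
import random

def laplace(scale):
    # inverse-CDF sampling of the Laplace distribution
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def noisy_gd_mean(data, epsilon, steps=100, lr=0.1, clip=1.0):
    """Gradient perturbation for the strongly convex ERM objective
    f(w) = mean((w - x_i)^2) / 2: clip each per-example gradient,
    average, and add Laplace noise before every update. The budget
    epsilon is split evenly across steps (naive composition)."""
    w = 0.0
    n = len(data)
    per_step_eps = epsilon / steps
    for _ in range(steps):
        grads = [max(-clip, min(clip, w - x)) for x in data]
        g = sum(grads) / n
        # one record changes the averaged clipped gradient by <= 2*clip/n
        g += laplace((2 * clip / n) / per_step_eps)
        w -= lr * g
    return w
```

With a large budget the noise is negligible and the iterate converges to the exact minimizer (the data mean); shrinking epsilon trades accuracy for privacy.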

Towards practical differentially private convex optimization

R Iyengar, JP Near, D Song, O Thakkar… - … IEEE symposium on …, 2019 - ieeexplore.ieee.org
Building useful predictive models often involves learning from sensitive data. Training
models with differential privacy can guarantee the privacy of such sensitive data. For convex …

Differential privacy in the wild: A tutorial on current practices & open challenges

A Machanavajjhala, X He, M Hay - Proceedings of the 2017 ACM …, 2017 - dl.acm.org
Differential privacy has emerged as an important standard for privacy preserving
computation over databases containing sensitive information about individuals. Research …
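The canonical building block surveyed in tutorials like this one is the Laplace mechanism. A counting query has sensitivity 1 (adding or removing one individual changes the count by at most 1), so adding Laplace(1/epsilon) noise yields epsilon-differential privacy. A minimal sketch (function names are illustrative):

```python
import math
import random

def laplace_noise(scale):
    # inverse-CDF sampling of the Laplace distribution
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def dp_count(records, predicate, epsilon):
    """Laplace mechanism for a counting query: sensitivity is 1,
    so Laplace(1/epsilon) noise suffices for epsilon-DP."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Noisy answer to "how many records are below 40?"
noisy = dp_count(range(100), lambda r: r < 40, epsilon=1.0)
```

Smaller epsilon means more noise and stronger privacy; answering many queries consumes budget under composition, one of the open practical challenges the tutorial discusses.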

Safeguarding cross-silo federated learning with local differential privacy


C Wang, X Wu, G Liu, T Deng, K Peng… - Digital Communications …, 2022 - Elsevier
Federated Learning (FL) is a new computing paradigm in privacy-preserving Machine
Learning (ML), where the ML model is trained in a decentralized manner by the clients …
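The classic primitive behind local differential privacy is randomized response: each client perturbs its own value before it ever leaves the device, and the server debiases the aggregate. A minimal sketch for one-bit reports (illustrative only; not the specific FL mechanism of this paper):

```python
import math
import random

def randomized_response(bit, epsilon):
    """Local DP: report the true bit with probability
    e^eps / (e^eps + 1), otherwise flip it, before sending anything
    to the server."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_truth else 1 - bit

def debias(reports, epsilon):
    """Server-side unbiased estimate of the true fraction of 1s,
    inverting E[report] = f*(2p - 1) + 1 - p."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed + p - 1) / (2 * p - 1)
```

Because noise is added locally, no single party (not even the aggregation server) sees raw client data, which is the appeal of local DP in cross-silo federated settings.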

Differential privacy and federal data releases

JP Reiter - Annual review of statistics and its application, 2019 - annualreviews.org
Federal statistics agencies strive to release data products that are informative for many
purposes, yet also protect the privacy and confidentiality of data subjects' identities and …

Overfitting, robustness, and malicious algorithms: A study of potential causes of privacy risk in machine learning

S Yeom, I Giacomelli, A Menaged… - Journal of …, 2020 - content.iospress.com
Machine learning algorithms, when applied to sensitive data, pose a distinct threat
to privacy. A growing body of prior work demonstrates that models produced by these …

Differentially private significance tests for regression coefficients

AF Barrientos, JP Reiter… - … of Computational and …, 2019 - Taylor & Francis
Many data producers seek to provide users access to confidential data without unduly
compromising data subjects' privacy and confidentiality. One general strategy is to require …

Alleviating privacy attacks via causal learning

S Tople, A Sharma, A Nori - International Conference on …, 2020 - proceedings.mlr.press
Machine learning models, especially deep neural networks, are known to be
susceptible to privacy attacks such as membership inference where an adversary can detect …