Privacy risk in machine learning: Analyzing the connection to overfitting
S Yeom, I Giacomelli, M Fredrikson… - 2018 IEEE 31st …, 2018 - ieeexplore.ieee.org
Machine learning algorithms, when applied to sensitive data, pose a distinct threat to
privacy. A growing body of prior work demonstrates that models produced by these …
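The overfitting connection this paper analyzes can be illustrated with a simple loss-threshold membership test (a minimal sketch of the general idea, not the paper's exact attack or constants): if a model's loss on training points is systematically lower than on unseen points, an adversary can guess "member" whenever the loss falls below a threshold.

```python
import numpy as np

def loss_threshold_mi(per_example_losses, threshold):
    """Predict membership: loss below threshold -> guessed 'in training set'.
    A train/test loss gap (overfitting) is what makes this guess informative."""
    return per_example_losses < threshold

# Hypothetical loss distributions: training losses are lower because the
# model overfits. These numbers are illustrative, not from the paper.
rng = np.random.default_rng(0)
train_losses = rng.exponential(scale=0.1, size=1000)   # members
test_losses = rng.exponential(scale=1.0, size=1000)    # non-members

threshold = 0.3
guesses_members = loss_threshold_mi(train_losses, threshold)
guesses_nonmembers = loss_threshold_mi(test_losses, threshold)

# Membership advantage = true positive rate minus false positive rate.
advantage = guesses_members.mean() - guesses_nonmembers.mean()
print(f"MI advantage: {advantage:.2f}")
```

The wider the train/test loss gap, the larger the advantage, which is the quantitative sense in which overfitting drives privacy risk.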
Stolen memories: Leveraging model memorization for calibrated white-box membership inference
K Leino, M Fredrikson - 29th USENIX security symposium (USENIX …, 2020 - usenix.org
Membership inference (MI) attacks exploit the fact that machine learning algorithms
sometimes leak information about their training data through the learned model. In this work …
Differentially private empirical risk minimization revisited: Faster and more general
In this paper we study differentially private Empirical Risk Minimization (ERM) in different
settings. For smooth (strongly) convex loss functions with or without (non-)smooth …
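One standard mechanism in this line of DP-ERM work is output perturbation: solve the ERM problem non-privately, then add noise calibrated to the solution's sensitivity before releasing it. Below is a sketch using the Gaussian mechanism with a textbook calibration; the sensitivity value and constants are illustrative assumptions, not the paper's rates.

```python
import numpy as np

def dp_output_perturbation(weights, l2_sensitivity, epsilon, delta, rng):
    """(epsilon, delta)-DP release of an ERM solution via the Gaussian
    mechanism: noise std is calibrated to the solution's L2 sensitivity."""
    sigma = l2_sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return weights + rng.normal(0.0, sigma, size=weights.shape)

rng = np.random.default_rng(0)
w = np.array([0.5, -1.2, 0.3])   # hypothetical non-private ERM solution
w_priv = dp_output_perturbation(w, l2_sensitivity=0.1,
                                epsilon=1.0, delta=1e-5, rng=rng)
```

For strongly convex losses the L2 sensitivity of the minimizer can be bounded analytically, which is why convexity assumptions are central to these results.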
Towards practical differentially private convex optimization
Building useful predictive models often involves learning from sensitive data. Training
models with differential privacy can guarantee the privacy of such sensitive data. For convex …
Differential privacy in the wild: A tutorial on current practices & open challenges
Differential privacy has emerged as an important standard for privacy preserving
computation over databases containing sensitive information about individuals. Research …
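The standard this tutorial covers is commonly introduced through the Laplace mechanism: to answer a numeric query with epsilon-differential privacy, add Laplace noise scaled to the query's sensitivity divided by epsilon. A minimal sketch for a counting query (sensitivity 1), with illustrative values:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """epsilon-DP Laplace mechanism: noise scale = sensitivity / epsilon."""
    return true_value + rng.laplace(0.0, sensitivity / epsilon)

rng = np.random.default_rng(42)
true_count = 128   # hypothetical count of records matching a predicate
noisy = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5, rng=rng)
```

Smaller epsilon means stronger privacy and larger noise, which is the utility/privacy trade-off the tutorial's "open challenges" revolve around.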
Safeguarding cross-silo federated learning with local differential privacy
Federated Learning (FL) is a new computing paradigm in privacy-preserving Machine
Learning (ML), where the ML model is trained in a decentralized manner by the clients …
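Local differential privacy, as used in this FL setting, means each client randomizes its own data before anything leaves the device. The classic building block is randomized response; the sketch below (illustrative, not the paper's protocol) shows clients reporting a private bit and the server recovering an unbiased population estimate.

```python
import numpy as np

def randomized_response(bit, epsilon, rng):
    """epsilon-LDP randomized response: report the true bit with
    probability e^eps / (e^eps + 1), otherwise report its flip."""
    p_true = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return bit if rng.random() < p_true else 1 - bit

rng = np.random.default_rng(1)
true_bits = (rng.random(10_000) < 0.3).astype(int)  # 30% of clients hold "1"
eps = 1.0
p = np.exp(eps) / (np.exp(eps) + 1.0)

reports = np.array([randomized_response(b, eps, rng) for b in true_bits])

# Debias: E[report] = (2p - 1) * f + (1 - p), solve for the true fraction f.
f_hat = (reports.mean() - (1.0 - p)) / (2.0 * p - 1.0)
```

The server never sees any client's true bit, yet the aggregate estimate concentrates around the true fraction as the number of clients grows.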
Differential privacy and federal data releases
JP Reiter - Annual review of statistics and its application, 2019 - annualreviews.org
Federal statistics agencies strive to release data products that are informative for many
purposes, yet also protect the privacy and confidentiality of data subjects' identities and …
Overfitting, robustness, and malicious algorithms: A study of potential causes of privacy risk in machine learning
S Yeom, I Giacomelli, A Menaged… - Journal of …, 2020 - content.iospress.com
Machine learning algorithms, when applied to sensitive data, pose a distinct threat
to privacy. A growing body of prior work demonstrates that models produced by these …
Differentially private significance tests for regression coefficients
AF Barrientos, JP Reiter… - … of Computational and …, 2019 - Taylor & Francis
Many data producers seek to provide users access to confidential data without unduly
compromising data subjects' privacy and confidentiality. One general strategy is to require …
Alleviating privacy attacks via causal learning
Machine learning models, especially deep neural networks, are known to be
susceptible to privacy attacks such as membership inference, where an adversary can detect …