Sociotechnical safeguards for genomic data privacy

Z Wan, JW Hazel, EW Clayton, Y Vorobeychik… - Nature Reviews …, 2022 - nature.com
Recent developments in a variety of sectors, including health care, research and the direct-
to-consumer industry, have led to a dramatic increase in the amount of genomic data that …

Privacy challenges and research opportunities for genomic data sharing

L Bonomi, Y Huang, L Ohno-Machado - Nature Genetics, 2020 - nature.com
The sharing of genomic data holds great promise in advancing precision medicine and
providing personalized treatments and other types of interventions. However, these …

A survey of machine unlearning

TT Nguyen, TT Huynh, Z Ren, PL Nguyen… - arXiv preprint arXiv …, 2022 - arxiv.org
Today, computer systems hold large amounts of personal data. Yet while such an
abundance of data allows breakthroughs in artificial intelligence, and especially machine …

Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning

M Nasr, R Shokri, A Houmansadr - 2019 IEEE symposium on …, 2019 - ieeexplore.ieee.org
Deep neural networks are susceptible to various inference attacks as they remember
information about their training data. We design white-box inference attacks to perform a …

Privacy risk in machine learning: Analyzing the connection to overfitting

S Yeom, I Giacomelli, M Fredrikson… - 2018 IEEE 31st …, 2018 - ieeexplore.ieee.org
Machine learning algorithms, when applied to sensitive data, pose a distinct threat to
privacy. A growing body of prior work demonstrates that models produced by these …

Machine learning with membership privacy using adversarial regularization

M Nasr, R Shokri, A Houmansadr - … of the 2018 ACM SIGSAC conference …, 2018 - dl.acm.org
Machine learning models leak a significant amount of information about their training sets
through their predictions. This is a serious privacy concern for the users of machine learning …

Stolen memories: Leveraging model memorization for calibrated white-box membership inference

K Leino, M Fredrikson - 29th USENIX security symposium (USENIX …, 2020 - usenix.org
Membership inference (MI) attacks exploit the fact that machine learning algorithms
sometimes leak information about their training data through the learned model. In this work …

Model inversion attacks that exploit confidence information and basic countermeasures

M Fredrikson, S Jha, T Ristenpart - … of the 22nd ACM SIGSAC conference …, 2015 - dl.acm.org
Machine-learning (ML) algorithms are increasingly utilized in privacy-sensitive applications
such as predicting lifestyle choices, making medical diagnoses, and facial recognition. In a …

SoK: Secure aggregation based on cryptographic schemes for federated learning

M Mansouri, M Önen, WB Jaballah… - Proceedings on Privacy …, 2023 - petsymposium.org
Secure aggregation consists of computing the sum of data collected from multiple sources
without disclosing these individual inputs. Secure aggregation has been found useful for …

Towards making systems forget with machine unlearning

Y Cao, J Yang - 2015 IEEE symposium on security and privacy, 2015 - ieeexplore.ieee.org
Today's systems produce a rapidly exploding amount of data, and the data further derives
more data, forming a complex data propagation network that we call the data's lineage …