Defenses to membership inference attacks: A survey
Machine learning (ML) has gained widespread adoption in a variety of fields, including
computer vision and natural language processing. However, ML models are vulnerable to …
Analyzing and defending against membership inference attacks in natural language processing classification
The risk posed by Membership Inference Attack (MIA) to deep learning models for Computer
Vision (CV) tasks is well known, but MIA has not been addressed or explored fully in the …
Inf2Guard: An Information-Theoretic Framework for Learning Privacy-Preserving Representations against Inference Attacks
SL Noorbakhsh, B Zhang, Y Hong… - 33rd USENIX Security …, 2024 - usenix.org
Machine learning (ML) is vulnerable to inference (e.g., membership inference, property
inference, and data reconstruction) attacks that aim to infer the private information of training …
Membership inference attacks via spatial projection-based relative information loss in MLaaS
Abstract Machine Learning as a Service (MLaaS) has significantly advanced data-driven
decision-making and the development of intelligent applications. However, the privacy risks …
Learning Robust and Privacy-Preserving Representations via Information Theory
Machine learning models are vulnerable to both security attacks (e.g., adversarial examples)
and privacy attacks (e.g., private attribute inference). We take the first step to mitigate both the …
Crafting Machine Learning Defenses against Adversaries
W Park - 2023 - deepblue.lib.umich.edu
Machine learning systems are becoming widely adopted and ubiquitous. Not only is there
a growth of products with machine learning at their core, like autonomous vehicles, but …