Privacy Leakage on DNNs: A Survey of Model Inversion Attacks and Defenses
Deep Neural Networks (DNNs) have revolutionized various domains with their exceptional
performance across numerous applications. However, Model Inversion (MI) attacks, which …
MIBench: A Comprehensive Benchmark for Model Inversion Attack and Defense
Model Inversion (MI) attacks aim at leveraging the output information of target models to
reconstruct privacy-sensitive training data, raising widespread concerns on privacy threats of …
Prediction Exposes Your Face: Black-box Model Inversion via Prediction Alignment
A model inversion (MI) attack reconstructs the private training data of a target model
given its output, posing a significant threat to deep learning models and data privacy. On …
Model Inversion Robustness: Can Transfer Learning Help?
Model Inversion (MI) attacks aim to reconstruct private training data by abusing
access to machine learning models. Contemporary MI attacks have achieved impressive …
Deep Learning Model Inversion Attacks and Defenses: A Comprehensive Survey
The rapid adoption of deep learning in sensitive domains has brought tremendous benefits.
However, this widespread adoption has also given rise to serious vulnerabilities, particularly …
TinyML Security: Exploring Vulnerabilities in Resource-Constrained Machine Learning Systems
J Huckelberry, Y Zhang, A Sansone, J Mickens… - arXiv preprint arXiv …, 2024 - arxiv.org
Tiny Machine Learning (TinyML) systems, which enable machine learning inference on
highly resource-constrained devices, are transforming edge computing but encounter …
Model Inversion Attacks: A Survey of Approaches and Countermeasures
The success of deep neural networks has driven numerous research studies and
applications from Euclidean to non-Euclidean data. However, there are increasing concerns …
Generative AI model privacy: a survey
The rapid progress of generative AI models has yielded substantial breakthroughs in AI,
facilitating the generation of realistic synthetic data across various modalities. However …
DORY: Deliberative Prompt Recovery for LLMs
Prompt recovery in large language models (LLMs) is crucial for understanding how LLMs
work and addressing concerns regarding privacy, copyright, etc. The trend towards …
Outlier-Oriented Poisoning Attack: A Grey-box Approach to Disturb Decision Boundaries by Perturbing Outliers in Multiclass Learning
Poisoning attacks are a primary threat to machine learning models, aiming to compromise
their performance and reliability by manipulating training datasets. This paper introduces a …