Privacy Leakage on DNNs: A Survey of Model Inversion Attacks and Defenses

H Fang, Y Qiu, H Yu, W Yu, J Kong, B Chong… - arXiv preprint arXiv …, 2024 - arxiv.org
Deep Neural Networks (DNNs) have revolutionized various domains with their exceptional
performance across numerous applications. However, Model Inversion (MI) attacks, which …
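
For readers unfamiliar with the attack class these surveys cover, below is a minimal sketch of a white-box, gradient-based model inversion attack: the attacker optimizes an input until the classifier assigns high confidence to a chosen target class. The tiny MLP and all hyperparameters are illustrative placeholders, not any surveyed paper's setup.

```python
# Minimal white-box model inversion sketch: optimize an input so the
# classifier assigns high confidence to a chosen target class.
# The MLP below is an untrained placeholder standing in for a victim model.
import torch
import torch.nn.functional as F

model = torch.nn.Sequential(
    torch.nn.Linear(784, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, 10),
)
model.eval()

target_class = 3
x = torch.zeros(1, 784, requires_grad=True)  # attacker-controlled input
opt = torch.optim.Adam([x], lr=0.1)

for _ in range(200):
    opt.zero_grad()
    logits = model(x)
    # Maximize the target logit; a small L2 prior keeps x from diverging.
    loss = -logits[0, target_class] + 1e-3 * x.pow(2).sum()
    loss.backward()
    opt.step()
    x.data.clamp_(0.0, 1.0)  # keep "pixels" in a valid range

print("target-class confidence:",
      F.softmax(model(x), dim=1)[0, target_class].item())
```

On a trained face classifier, the recovered x can resemble an average training image of the target identity, which is exactly the leakage these surveys catalogue.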

MIBench: A Comprehensive Benchmark for Model Inversion Attack and Defense

Y Qiu, H Yu, H Fang, W Yu, B Chen, X Wang… - arXiv preprint arXiv …, 2024 - arxiv.org
Model Inversion (MI) attacks aim at leveraging the output information of target models to
reconstruct privacy-sensitive training data, raising widespread concerns on privacy threats of …

Prediction Exposes Your Face: Black-box Model Inversion via Prediction Alignment

Y Liu, W Zhang, D Wu, Z Lin, J Gu, W Wang - European Conference on …, 2024 - Springer
A model inversion (MI) attack reconstructs the private training data of a target model
given its output, posing a significant threat to deep learning models and data privacy. On …
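
In the black-box setting this paper addresses, the attacker sees only prediction vectors. The sketch below shows just that query-and-align loop using naive random-search hill climbing; the paper's actual Prediction Alignment method is considerably more sophisticated, and the linear "remote model" W is a hypothetical stand-in.

```python
# Black-box setting: the attacker only sees prediction vectors and
# searches for an input whose predictions align with a target output.
# Plain random-search hill climbing here, NOT the paper's method.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(784, 10))  # hypothetical stand-in for the remote model

def query(x):
    """Black-box oracle: returns only the softmax prediction vector."""
    z = x @ W
    e = np.exp(z - z.max())
    return e / e.sum()

target = np.eye(10)[3]          # prediction vector the attacker wants to match
x = rng.normal(size=784)
best = np.linalg.norm(query(x) - target)

for _ in range(2000):
    cand = x + 0.05 * rng.normal(size=784)   # random local perturbation
    gap = np.linalg.norm(query(cand) - target)
    if gap < best:                           # keep moves that align better
        x, best = cand, gap

print("final prediction gap:", best)
```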

Model Inversion Robustness: Can Transfer Learning Help?

ST Ho, KJ Hao, K Chandrasegaran… - Proceedings of the …, 2024 - openaccess.thecvf.com
Model Inversion (MI) attacks aim to reconstruct private training data by abusing
access to machine learning models. Contemporary MI attacks have achieved impressive …
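
The defense idea hinted at here, as best understood from the abstract, is to limit how many parameters are fine-tuned on private data. A minimal sketch of that freezing pattern, assuming a toy model and dummy data rather than the paper's actual architecture:

```python
# Sketch of the transfer-learning defense pattern: early layers stay
# frozen at "public" pre-trained values; only the head is fine-tuned
# on private data, so fewer parameters can memorize private features.
import torch

backbone = torch.nn.Sequential(
    torch.nn.Linear(784, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 128), torch.nn.ReLU(),
)
head = torch.nn.Linear(128, 10)

for p in backbone.parameters():
    p.requires_grad = False      # freeze the publicly pre-trained layers

opt = torch.optim.SGD(head.parameters(), lr=0.01)
x = torch.randn(32, 784)                      # dummy "private" batch
y = torch.randint(0, 10, (32,))

loss = torch.nn.functional.cross_entropy(head(backbone(x)), y)
opt.zero_grad()
loss.backward()                  # gradients flow only into the head
opt.step()
print("private-data loss:", loss.item())
```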

Deep Learning Model Inversion Attacks and Defenses: A Comprehensive Survey

W Yang, S Wang, D Wu, T Cai, Y Zhu, S Wei… - arXiv preprint arXiv …, 2025 - arxiv.org
The rapid adoption of deep learning in sensitive domains has brought tremendous benefits.
However, this widespread adoption has also given rise to serious vulnerabilities, particularly …

TinyML Security: Exploring Vulnerabilities in Resource-Constrained Machine Learning Systems

J Huckelberry, Y Zhang, A Sansone, J Mickens… - arXiv preprint arXiv …, 2024 - arxiv.org
Tiny Machine Learning (TinyML) systems, which enable machine learning inference on
highly resource-constrained devices, are transforming edge computing but encounter …

Model Inversion Attacks: A Survey of Approaches and Countermeasures

Z Zhou, J Zhu, F Yu, X Li, X Peng, T Liu… - arXiv preprint arXiv …, 2024 - arxiv.org
The success of deep neural networks has driven numerous research studies and
applications from Euclidean to non-Euclidean data. However, there are increasing concerns …

Generative AI Model Privacy: A Survey

Y Liu, J Huang, Y Li, D Wang, B Xiao - Artificial Intelligence Review, 2025 - Springer
The rapid progress of generative AI models has yielded substantial breakthroughs in AI,
facilitating the generation of realistic synthetic data across various modalities. However …

DORY: Deliberative Prompt Recovery for LLM

L Gao, R Peng, Y Zhang, J Zhao - arXiv preprint arXiv:2405.20657, 2024 - arxiv.org
Prompt recovery in large language models (LLMs) is crucial for understanding how LLMs
work and addressing concerns regarding privacy, copyright, etc. The trend towards …
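
As a toy framing of the prompt-recovery problem, the sketch below ranks candidate prompts by lexical overlap with an observed model output. Every string here is invented for illustration, and DORY's actual deliberative pipeline is far more elaborate than this scoring step.

```python
# Toy framing of prompt recovery: given only a model's output text,
# rank candidate prompts by lexical overlap with that output.
# All strings are invented; this is only the problem setup, not DORY.
output = "the capital of france is paris"
candidates = [
    "what is the capital of france",
    "translate hello to spanish",
    "name a city in germany",
]

def jaccard(a, b):
    """Crude similarity between two whitespace-tokenized strings."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

best_guess = max(candidates, key=lambda c: jaccard(c, output))
print("recovered prompt guess:", best_guess)
```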

Outlier-Oriented Poisoning Attack: A Grey-box Approach to Disturb Decision Boundaries by Perturbing Outliers in Multiclass Learning

A Paracha, J Arshad, MB Farah, K Ismail - arXiv preprint arXiv:2411.00519, 2024 - arxiv.org
Poisoning attacks are a primary threat to machine learning models, aiming to compromise
their performance and reliability by manipulating training datasets. This paper introduces a …
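
A hedged sketch of the general idea on a synthetic 2-class dataset: score points by distance from their class mean, flip the labels of the top outliers, retrain, and compare accuracy. The selection rule and the 20-flip budget are illustrative choices, not the paper's grey-box procedure.

```python
# Hedged sketch of outlier-oriented label-flip poisoning on a toy
# 2-class dataset: score points by distance from their class mean,
# flip the labels of the top outliers, retrain, and compare accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

clean = LogisticRegression().fit(X, y)

# Distance from each point to its own class mean = crude outlier score.
means = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])
scores = np.linalg.norm(X - means[y], axis=1)
flip = np.argsort(scores)[-20:]               # poison the 20 strongest outliers

y_poisoned = y.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]       # label flip on the outliers
poisoned = LogisticRegression().fit(X, y_poisoned)

print("clean acc:", clean.score(X, y))
print("poisoned acc:", poisoned.score(X, y))
```

Targeting outliers rather than random points is the key design choice: they sit near the decision boundary's extremes, so flipping them shifts the learned boundary more per poisoned sample.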