Reverse engineering of deceptions on machine- and human-centric attacks

Y Yao, X Guo, V Asnani, Y Gong, J Liu… - … and Trends® in …, 2024 - nowpublishers.com
This work presents a comprehensive exploration of Reverse Engineering of Deceptions
(RED) in the field of adversarial machine learning. It delves into the intricacies of machine …

Reverse engineering of imperceptible adversarial image perturbations

Y Gong, Y Yao, Y Li, Y Zhang, X Liu, X Lin… - arXiv preprint arXiv …, 2022 - arxiv.org
It has been well recognized that neural-network-based image classifiers are easily fooled by
images with tiny perturbations crafted by an adversary. There has been a vast volume of …
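
As background for this entry, the kind of tiny adversarial perturbation described above can be crafted in a single gradient step. Below is a minimal sketch of the fast gradient sign method (FGSM, Goodfellow et al.), assuming PyTorch; `model` and the 8/255 `epsilon` budget are illustrative placeholders, not this paper's setup (the paper studies estimating such perturbations, not generating them).

```python
# Minimal FGSM sketch: one illustrative way an adversary crafts an
# imperceptible perturbation. `model` is any differentiable classifier;
# `epsilon` bounds the l_inf size of the perturbation.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Return x perturbed by one signed-gradient step of size epsilon."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then keep pixels valid.
    x_adv = x + epsilon * x.grad.sign()
    return torch.clamp(x_adv, 0.0, 1.0).detach()
```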

Instant Adversarial Purification with Adversarial Consistency Distillation

CT Lei, HM Yam, Z Guo, CP Lau - arXiv preprint arXiv:2408.17064, 2024 - arxiv.org
Neural networks, despite their remarkable performance in widespread applications,
including image classification, are also known to be vulnerable to subtle adversarial noise …
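
As context for this entry, adversarial purification generally injects fresh noise into a suspect input and then denoises it before classification, so that the adversarial perturbation is drowned out and removed. The sketch below shows only that generic noise-and-denoise pipeline; `denoiser` is a hypothetical pretrained image denoiser, `sigma` is an assumed noise level, and the paper's actual contribution (fast one-step purification via adversarial consistency distillation) is not reproduced here.

```python
# Generic purification sketch: add Gaussian noise to mask the adversarial
# perturbation, then denoise before handing the image to a classifier.
import torch

def purify(denoiser, x_adv, sigma=0.25):
    """Noise-inject then denoise a (possibly adversarial) image batch."""
    noisy = x_adv + sigma * torch.randn_like(x_adv)
    with torch.no_grad():
        x_clean = denoiser(noisy)  # assumed signature: image batch -> image batch
    return torch.clamp(x_clean, 0.0, 1.0)

# Usage (assumed names): logits = classifier(purify(denoiser, x_adv))
```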

Adversarial attacks and robust defenses in deep learning

CP Lau, J Liu, WA Lin, H Souri, P Khorramshahi… - Handbook of …, 2023 - Elsevier
Deep learning models have shown exceptional performance in many applications, including
computer vision, natural language processing, and speech processing. However, if no …

On Trace of PGD-Like Adversarial Attacks

M Zhou, VM Patel - International Conference on Pattern Recognition, 2025 - Springer
Adversarial attacks pose security concerns for deep learning applications, but their
characteristics are under-explored. Although largely imperceptible, a strong trace could have …
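
The "PGD-like" attacks whose traces this entry studies iterate signed-gradient steps and project the perturbation back onto an l_inf ball. Below is a minimal PGD sketch (Madry et al.), again assuming PyTorch; the `epsilon`, `alpha`, and step-count defaults are illustrative, not the configurations traced in the paper.

```python
# PGD sketch: iterate FGSM-style steps and project the running
# perturbation back onto the l_inf ball of radius epsilon around x.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=10):
    """Iterated signed-gradient attack projected onto an l_inf epsilon-ball."""
    x_adv = torch.clamp(x + torch.empty_like(x).uniform_(-epsilon, epsilon),
                        0.0, 1.0).detach()  # random start inside the ball
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                    # gradient step
            x_adv = x + torch.clamp(x_adv - x, -epsilon, epsilon)  # project to ball
            x_adv = torch.clamp(x_adv, 0.0, 1.0)                   # keep pixels valid
    return x_adv.detach()
```

The random start and the per-step projection are what distinguish PGD from the one-step FGSM sketch earlier in this list.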

MMAD-Purify: A Precision-Optimized Framework for Efficient and Scalable Multi-Modal Attacks

X Liu, Z Guo, S Huang, CP Lau - arXiv preprint arXiv:2410.14089, 2024 - arxiv.org
Neural networks have achieved remarkable performance across a wide range of tasks, yet
they remain susceptible to adversarial perturbations, which pose significant risks in safety …

Can Adversarial Examples Be Parsed to Reveal Victim Model Information?

Y Yao, J Liu, Y Gong, X Liu, Y Wang, X Lin… - arXiv preprint arXiv …, 2023 - arxiv.org
Numerous adversarial attack methods have been developed to generate imperceptible
image perturbations that can cause erroneous predictions of state-of-the-art machine …