Adversarial attacks and defenses in machine learning-empowered communication systems and networks: A contemporary survey

Y Wang, T Sun, S Li, X Yuan, W Ni… - … Surveys & Tutorials, 2023 - ieeexplore.ieee.org
Adversarial attacks and defenses in machine learning and deep neural network (DNN) have
been gaining significant attention due to the rapidly growing applications of deep learning in …

Interpretability research of deep learning: A literature survey

B Xu, G Yang - Information Fusion, 2024 - Elsevier
Deep learning (DL) has been widely used in various fields. However, its black-box nature
limits people's understanding and trust in its decision-making process. Therefore, it becomes …

Binary neural networks: A survey

H Qin, R Gong, X Liu, X Bai, J Song, N Sebe - Pattern Recognition, 2020 - Elsevier
The binary neural network, largely saving the storage and computation, serves as a
promising technique for deploying deep models on resource-limited devices. However, the …

Understanding adversarial attacks on deep learning based medical image analysis systems

X Ma, Y Niu, L Gu, Y Wang, Y Zhao, J Bailey, F Lu - Pattern Recognition, 2021 - Elsevier
Deep neural networks (DNNs) have become popular for medical image analysis tasks like
cancer diagnosis and lesion detection. However, a recent study demonstrates that medical …

X-Adv: Physical adversarial object attacks against x-ray prohibited item detection

A Liu, J Guo, J Wang, S Liang, R Tao, W Zhou… - 32nd USENIX Security …, 2023 - usenix.org
Adversarial attacks are valuable for evaluating the robustness of deep learning models.
Existing attacks are primarily conducted on the visible light spectrum (e.g., pixel-wise texture …

Dual attention suppression attack: Generate adversarial camouflage in physical world

J Wang, A Liu, Z Yin, S Liu… - Proceedings of the …, 2021 - openaccess.thecvf.com
Deep learning models are vulnerable to adversarial examples. As a more threatening type
for practical deep learning systems, physical adversarial examples have received extensive …

Adversarial training methods for deep learning: A systematic review

W Zhao, S Alwidian, QH Mahmoud - Algorithms, 2022 - mdpi.com
Deep neural networks are exposed to the risk of adversarial attacks via the fast gradient sign
method (FGSM), projected gradient descent (PGD) attacks, and other attack algorithms …

Deepfacelab: Integrated, flexible and extensible face-swapping framework

K Liu, I Perov, D Gao, N Chervoniy, W Zhou… - Pattern Recognition, 2023 - Elsevier
Face swapping has drawn a lot of attention for its compelling performance. However, current
deepfake methods suffer the effects of obscure workflow and poor performance. To solve …

Bias-based universal adversarial patch attack for automatic check-out

A Liu, J Wang, X Liu, B Cao, C Zhang, H Yu - Computer Vision–ECCV …, 2020 - Springer
Adversarial examples are inputs with imperceptible perturbations that easily mislead
deep neural networks (DNNs). Recently, adversarial patch, with noise confined to a small …

Towards benchmarking and assessing visual naturalness of physical world adversarial attacks

S Li, S Zhang, G Chen, D Wang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Physical world adversarial attack is a highly practical and threatening attack, which fools real
world deep learning systems by generating conspicuous and maliciously crafted real world …