Adversarial attacks and defenses in machine learning-empowered communication systems and networks: A contemporary survey
Adversarial attacks and defenses in machine learning and deep neural networks (DNNs) have
been gaining significant attention due to the rapidly growing applications of deep learning in …
Interpretability research of deep learning: A literature survey
B Xu, G Yang - Information Fusion, 2024 - Elsevier
Deep learning (DL) has been widely used in various fields. However, its black-box nature
limits people's understanding and trust in its decision-making process. Therefore, it becomes …
Binary neural networks: A survey
The binary neural network, which largely saves storage and computation, serves as a
promising technique for deploying deep models on resource-limited devices. However, the …
Understanding adversarial attacks on deep learning based medical image analysis systems
Deep neural networks (DNNs) have become popular for medical image analysis tasks like
cancer diagnosis and lesion detection. However, a recent study demonstrates that medical …
X-Adv: Physical adversarial object attacks against X-ray prohibited item detection
Adversarial attacks are valuable for evaluating the robustness of deep learning models.
Existing attacks are primarily conducted on the visible light spectrum (e.g., pixel-wise texture …
Dual attention suppression attack: Generate adversarial camouflage in physical world
Deep learning models are vulnerable to adversarial examples. As a more threatening type
for practical deep learning systems, physical adversarial examples have received extensive …
Adversarial training methods for deep learning: A systematic review
Deep neural networks are exposed to the risk of adversarial attacks via the fast gradient sign
method (FGSM), projected gradient descent (PGD) attacks, and other attack algorithms …
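For context on the two attacks this entry names, the following is a minimal, illustrative PyTorch sketch of FGSM and PGD, not the implementation used in the surveyed work. It assumes `model` is any classifier mapping image batches in [0, 1] to logits; the function names and hyperparameters (eps, alpha, steps) are assumptions chosen for illustration.

```python
# Illustrative FGSM / PGD sketch (assumed names and hyperparameters, not from the survey).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    """One-step FGSM: move x along the sign of the input gradient of the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD: repeated FGSM-style steps, projected back into the L-inf eps-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)  # project into the eps-ball
            x_adv = x_adv.clamp(0, 1)  # keep valid pixel range
    return x_adv.detach()
```

Adversarial training, the topic of this survey entry, would then feed such perturbed batches back into the ordinary training loss in place of (or alongside) the clean inputs.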
DeepFaceLab: Integrated, flexible and extensible face-swapping framework
Face swapping has drawn a lot of attention for its compelling performance. However, current
deepfake methods suffer the effects of obscure workflow and poor performance. To solve …
Bias-based universal adversarial patch attack for automatic check-out
Adversarial examples are inputs with imperceptible perturbations that easily mislead
deep neural networks (DNNs). Recently, adversarial patch, with noise confined to a small …
Towards benchmarking and assessing visual naturalness of physical world adversarial attacks
Physical-world adversarial attacks are highly practical and threatening: they fool real-world
deep learning systems by generating conspicuous and maliciously crafted real world …