Adversarial machine learning for network intrusion detection systems: A comprehensive survey

K He, DD Kim, MR Asghar - IEEE Communications Surveys & …, 2023 - ieeexplore.ieee.org
A Network-based Intrusion Detection System (NIDS) forms the frontline defence against
network attacks that compromise the security of data, systems, and networks. In recent …

GAN-based anomaly detection: A review

X Xia, X Pan, N Li, X He, L Ma, X Zhang, N Ding - Neurocomputing, 2022 - Elsevier
Supervised learning algorithms have seen limited use in the field of anomaly detection
because abnormal samples are unpredictable and difficult to acquire. In recent years …
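
The common recipe behind these GAN detectors is to model only normal data and flag anything the model cannot reconstruct. A minimal PyTorch sketch of that scoring step, assuming an `encoder` and `generator` pre-trained on normal samples only (both names are illustrative, not from the review):

```python
import torch

def anomaly_score(x, encoder, generator):
    """Score a batch: a large reconstruction error suggests the sample
    lies off the normal-data manifold the generator has learned."""
    with torch.no_grad():
        z = encoder(x)        # latent code of the test sample
        x_hat = generator(z)  # closest reconstruction from normal data
    # per-sample L1 residual, averaged over all pixels/features
    return (x - x_hat).abs().flatten(1).mean(dim=1)
```

Thresholding this score (e.g., at a percentile of the scores on held-out normal data) then yields the anomaly decision.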

Backdoor learning: A survey

Y Li, Y Jiang, Z Li, ST Xia - IEEE transactions on neural …, 2022 - ieeexplore.ieee.org
A backdoor attack aims to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …
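
The snippet describes the standard threat model; the sketch below shows how such a backdoor is typically planted via data poisoning (BadNets-style). The `rate`, `target_class`, and white corner-patch trigger are illustrative choices, not taken from the survey:

```python
import torch

def poison_batch(images, labels, target_class=0, rate=0.05, patch=3):
    """Stamp a small white trigger patch onto a `rate` fraction of the batch
    and relabel those samples to `target_class`.
    Assumes images of shape (B, C, H, W) with values in [0, 1]."""
    images, labels = images.clone(), labels.clone()
    n = max(1, int(rate * images.size(0)))
    idx = torch.randperm(images.size(0))[:n]
    images[idx, :, -patch:, -patch:] = 1.0  # bottom-right trigger patch
    labels[idx] = target_class              # attacker-chosen label
    return images, labels
```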

Interpreting adversarial examples in deep learning: A review

S Han, C Lin, C Shen, Q Wang, X Guan - ACM Computing Surveys, 2023 - dl.acm.org
Deep learning technology is increasingly being applied in safety-critical scenarios but has
recently been found to be susceptible to imperceptible adversarial perturbations. This raises …
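
The canonical construction of such an imperceptible perturbation is FGSM (Goodfellow et al.); the sketch below is that standard formulation, not something specific to this review:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """One signed-gradient ascent step on the loss, bounded by `eps`
    in the L-infinity norm."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()
```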

Reflection backdoor: A natural backdoor attack on deep neural networks

Y Liu, X Ma, J Bailey, F Lu - Computer vision–ECCV 2020: 16th European …, 2020 - Springer
Recent studies have shown that DNNs can be compromised by backdoor attacks crafted at
training time. A backdoor attack installs a backdoor into the victim model by injecting a …
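
A rough sketch of the reflection idea: the trigger is a natural-looking reflection blended into the host image rather than an obvious patch. The plain alpha blend below is only illustrative; the paper itself uses physically motivated reflection models:

```python
import torch

def add_reflection_trigger(x, reflection, alpha=0.4):
    """Blend a natural 'reflection' image into the host image so the
    poisoned sample still looks plausible to a human inspector.
    `alpha` is an illustrative blending strength, not from the paper."""
    return (x + alpha * reflection).clamp(0, 1)
```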

Neural attention distillation: Erasing backdoor triggers from deep neural networks

Y Li, X Lyu, N Koren, L Lyu, B Li, X Ma - arXiv preprint arXiv:2101.05930, 2021 - arxiv.org
Deep neural networks (DNNs) are known to be vulnerable to backdoor attacks, a training-time
attack that injects a trigger pattern into a small proportion of the training data so as to control the …
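
Neural Attention Distillation erases such triggers by aligning the backdoored model's attention maps with those of a teacher fine-tuned on a small clean set. A simplified sketch; layer selection and loss weighting follow the paper only loosely:

```python
import torch
import torch.nn.functional as F

def attention_map(feat, p=2):
    """Spatial attention map: channel-wise p-power mean, L2-normalised."""
    a = feat.abs().pow(p).mean(dim=1)          # (B, H, W)
    return F.normalize(a.flatten(1), dim=1)

def nad_loss(student_feats, teacher_feats):
    """Distillation term that pulls the (backdoored) student's attention
    toward the clean-finetuned teacher's, layer by layer."""
    return sum(F.mse_loss(attention_map(s), attention_map(t))
               for s, t in zip(student_feats, teacher_feats))
```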

Adversarial weight perturbation helps robust generalization

D Wu, ST Xia, Y Wang - Advances in neural information …, 2020 - proceedings.neurips.cc
Research on improving the robustness of deep neural networks against adversarial
examples has grown rapidly in recent years. Among these efforts, adversarial training is the most …
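
The paper's core move is to perturb the weights, not just the inputs, in the loss-ascending direction before each descent step. A one-step sketch, assuming `loss_fn` is the adversarial training loss; the backup-and-restore of the original weights after the update is omitted for brevity:

```python
import torch

def awp_step(model, loss_fn, x_adv, y, gamma=5e-3):
    """Adversarially perturb each weight, scaled to its own norm.
    `gamma` is an illustrative perturbation budget."""
    params = [p for p in model.parameters() if p.requires_grad]
    loss = loss_fn(model(x_adv), y)
    grads = torch.autograd.grad(loss, params)
    with torch.no_grad():
        for p, g in zip(params, grads):
            # move each weight toward higher loss, relative to its norm
            p.add_(gamma * p.norm() * g / (g.norm() + 1e-12))
```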

Improving adversarial robustness requires revisiting misclassified examples

Y Wang, D Zou, J Yi, J Bailey, X Ma… - … conference on learning …, 2019 - openreview.net
Deep neural networks (DNNs) are vulnerable to adversarial examples crafted by
imperceptible perturbations. A range of defense techniques have been proposed to improve …
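
The paper (MART) argues that examples the clean model already misclassifies deserve extra weight in the robust loss. A simplified sketch of that weighting; the boosted cross-entropy term and exact formulation are condensed from the paper:

```python
import torch
import torch.nn.functional as F

def mart_style_loss(logits_adv, logits_clean, y, beta=6.0):
    """Adversarial CE plus a KL regulariser that is up-weighted on
    examples the clean model gets wrong (low probability on the label)."""
    ce = F.cross_entropy(logits_adv, y)
    kl = F.kl_div(F.log_softmax(logits_adv, dim=1),
                  F.softmax(logits_clean, dim=1),
                  reduction="none").sum(dim=1)
    p_correct = F.softmax(logits_clean, dim=1).gather(1, y.unsqueeze(1)).squeeze(1)
    return ce + beta * (kl * (1.0 - p_correct)).mean()
```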

X-Adv: Physical adversarial object attacks against X-ray prohibited item detection

A Liu, J Guo, J Wang, S Liang, R Tao, W Zhou… - 32nd USENIX Security …, 2023 - usenix.org
Adversarial attacks are valuable for evaluating the robustness of deep learning models.
Existing attacks are primarily conducted in the visible light spectrum (e.g., pixel-wise texture …

Adversarial training for free!

A Shafahi, M Najibi, MA Ghiasi, Z Xu… - Advances in neural …, 2019 - proceedings.neurips.cc
Adversarial training, in which a network is trained on adversarial examples, is one of the few
defenses against adversarial attacks that withstand strong attacks. Unfortunately, the high …
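
The "free" trick is to replay each minibatch several times and reuse a single backward pass both for the weight update and for the input perturbation, so robustness costs roughly as much as natural training. A simplified PyTorch sketch; the persistent-`delta` handling and replay count are condensed from the paper's algorithm:

```python
import torch
import torch.nn.functional as F

def free_at_step(model, optimizer, x, y, delta, eps=8 / 255, replays=4):
    """One minibatch of 'free' adversarial training: each backward pass
    updates the weights AND advances the perturbation `delta`."""
    for _ in range(replays):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()  # weight update from the same backward pass
        with torch.no_grad():
            # ascend on the input using the gradient we already have
            delta = (delta + eps * delta.grad.sign()).clamp(-eps, eps)
        delta = delta.detach()  # fresh leaf tensor for the next replay
    return delta  # carried over to the next minibatch, as in the paper
```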