Adversarial machine learning for network intrusion detection systems: A comprehensive survey
Network-based Intrusion Detection System (NIDS) forms the frontline defence against
network attacks that compromise the security of the data, systems, and networks. In recent …
GAN-based anomaly detection: A review
X Xia, X Pan, N Li, X He, L Ma, X Zhang, N Ding - Neurocomputing, 2022 - Elsevier
Supervised learning algorithms have shown limited use in the field of anomaly detection due
to the unpredictability and difficulty in acquiring abnormal samples. In recent years …
Backdoor learning: A survey
A backdoor attack intends to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …
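To make the mechanism in this snippet concrete, here is a minimal, hypothetical sketch of BadNets-style data poisoning in PyTorch; the white-patch trigger, 1% poison rate, and target label are illustrative assumptions, not details taken from the survey.

```python
# Minimal sketch of backdoor data poisoning (BadNets-style patch trigger).
# Illustrative only: the tensor shapes, 3x3 white-patch trigger, 1% poison
# rate, and target label are assumptions, not taken from the survey.
import torch

def poison_dataset(images, labels, target_label=0, poison_rate=0.01):
    """Stamp a small trigger patch onto a random subset of images and
    relabel those samples to the attacker's target class."""
    images, labels = images.clone(), labels.clone()
    n_poison = max(1, int(poison_rate * len(images)))
    idx = torch.randperm(len(images))[:n_poison]
    # Trigger: a 3x3 white square in the bottom-right corner of every channel.
    images[idx, :, -3:, -3:] = 1.0
    labels[idx] = target_label
    return images, labels, idx

# Toy usage: 100 fake 3x32x32 images with 10 classes.
x = torch.rand(100, 3, 32, 32)
y = torch.randint(0, 10, (100,))
x_p, y_p, poisoned_idx = poison_dataset(x, y)
print(f"poisoned {len(poisoned_idx)} of {len(x)} samples")
```

A model trained on such a poisoned set behaves normally on clean inputs but predicts the target class whenever the trigger patch is present.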
Interpreting adversarial examples in deep learning: A review
Deep learning technology is increasingly being applied in safety-critical scenarios but has
recently been found to be susceptible to imperceptible adversarial perturbations. This raises …
Reflection backdoor: A natural backdoor attack on deep neural networks
Recent studies have shown that DNNs can be compromised by backdoor attacks crafted at
training time. A backdoor attack installs a backdoor into the victim model by injecting a …
Neural attention distillation: Erasing backdoor triggers from deep neural networks
Deep neural networks (DNNs) are known to be vulnerable to backdoor attacks, a training-time
attack that injects a trigger pattern into a small proportion of training data so as to control the …
Adversarial weight perturbation helps robust generalization
Research on improving the robustness of deep neural networks against adversarial
examples has grown rapidly in recent years. Among the proposed defenses, adversarial training is the most …
Improving adversarial robustness requires revisiting misclassified examples
Deep neural networks (DNNs) are vulnerable to adversarial examples crafted by
imperceptible perturbations. A range of defense techniques have been proposed to improve …
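As a concrete illustration of how such imperceptible perturbations can be crafted, here is a minimal FGSM sketch in PyTorch; the toy linear classifier and the eps = 8/255 budget are illustrative assumptions, not details from the paper.

```python
# Minimal FGSM sketch: one signed-gradient step of size eps, clipped back
# to the valid [0, 1] image range. Toy model and eps are assumptions.
import torch
import torch.nn as nn

def fgsm(model, x, y, eps=8 / 255):
    """Return x perturbed in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy usage with a random linear classifier over flattened 3x32x32 inputs.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)
y = torch.randint(0, 10, (4,))
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max())  # perturbation is bounded by eps
```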
X-Adv: Physical adversarial object attacks against x-ray prohibited item detection
Adversarial attacks are valuable for evaluating the robustness of deep learning models.
Existing attacks are primarily conducted on the visible light spectrum (e.g., pixel-wise texture …
Adversarial training for free!
Adversarial training, in which a network is trained on adversarial examples, is one of the few
defenses against adversarial attacks that withstands strong attacks. Unfortunately, the high …
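For reference, here is a minimal sketch of plain adversarial training, i.e. training on FGSM-perturbed batches. It illustrates the defense the snippet describes, not the gradient-reuse speedup this paper proposes; the toy model, random data, eps, and learning rate are assumptions.

```python
# Minimal sketch of standard adversarial training: each batch is replaced by
# FGSM-perturbed inputs before the usual parameter update. Not the "free"
# gradient-reuse variant; all hyperparameters here are illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
eps = 8 / 255

for step in range(10):  # toy loop over random data
    x = torch.rand(16, 3, 32, 32)
    y = torch.randint(0, 10, (16,))

    # Craft adversarial examples against the current model (single FGSM step).
    x_req = x.clone().requires_grad_(True)
    nn.functional.cross_entropy(model(x_req), y).backward()
    x_adv = (x + eps * x_req.grad.sign()).clamp(0.0, 1.0)

    # Train on the adversarial batch.
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    opt.step()
    print(f"step {step}: loss {loss.item():.3f}")
```

The cost the snippet alludes to comes from crafting fresh adversarial examples every iteration; the paper's contribution is reducing that overhead.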