Adversarial machine learning for network intrusion detection systems: A comprehensive survey

K He, DD Kim, MR Asghar - IEEE Communications Surveys & …, 2023 - ieeexplore.ieee.org
A Network-based Intrusion Detection System (NIDS) forms the frontline defence against
network attacks that compromise the security of data, systems, and networks. In recent …

Adversarial attacks and defenses in images, graphs and text: A review

H Xu, Y Ma, HC Liu, D Deb, H Liu, JL Tang… - International journal of …, 2020 - Springer
Deep neural networks (DNNs) have achieved unprecedented success in numerous machine
learning tasks in various domains. However, the existence of adversarial examples raises …

Using pre-training can improve model robustness and uncertainty

D Hendrycks, K Lee, M Mazeika - … conference on machine …, 2019 - proceedings.mlr.press
He et al. (2018) have called into question the utility of pre-training by showing that
training from scratch can often yield similar performance to pre-training. We show that …
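
For illustration, a minimal numpy sketch of the comparison this paper makes at toy scale: adversarial fine-tuning where the only knob is the weight initialisation, "pre-trained" versus from scratch. The toy data, the simulated pre-trained weights, and all hyperparameters are invented for the example and are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy data (invented for the example): a linearly separable task.
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adv_finetune(w_init, eps=0.1, lr=0.1, epochs=50):
    # Adversarial training on FGSM-perturbed inputs; the only thing
    # varied between the two runs below is the initialisation of w.
    w = w_init.copy()
    for _ in range(epochs):
        p = sigmoid(X @ w)
        X_adv = X + eps * np.sign(np.outer(p - y, w))  # per-example FGSM
        p_adv = sigmoid(X_adv @ w)
        w -= lr * X_adv.T @ (p_adv - y) / len(y)
    return w

w_scratch = adv_finetune(rng.normal(size=10))                    # random init
w_pretrained = adv_finetune(w_true + 0.1 * rng.normal(size=10))  # simulated pre-training
```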

Combining graph-based learning with automated data collection for code vulnerability detection

H Wang, G Ye, Z Tang, SH Tan… - IEEE Transactions …, 2020 - ieeexplore.ieee.org
This paper presents FUNDED (Flow-sensitive vUlNerability coDE Detection), a novel
learning framework for building vulnerability detection models. FUNDED leverages the …
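
The snippet names a graph-based learning framework; below is a schematic numpy sketch of the underlying idea, message passing over a program graph with typed edges followed by a graph-level readout. The graph, features, and weights are toy stand-ins; FUNDED's actual GGNN architecture, node embeddings, and training pipeline are more involved:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy code graph (illustrative): 4 program-statement nodes, two
# relation types (AST edges and data-flow edges), 8-d node features.
n_nodes, dim = 4, 8
H = rng.normal(size=(n_nodes, dim))  # initial node states
A = {"ast":  np.array([[0,1,0,0],[0,0,1,0],[0,0,0,1],[0,0,0,0]], float),
     "flow": np.array([[0,0,1,0],[0,0,0,1],[0,0,0,0],[0,0,0,0]], float)}
W = {k: rng.normal(size=(dim, dim)) * 0.1 for k in A}  # per-edge-type weights
w_out = rng.normal(size=dim)

def relu(x):
    return np.maximum(x, 0.0)

# GGNN-style propagation: each round, every node aggregates messages
# from its in-neighbours, separately per edge type.
for _ in range(3):
    msg = sum(A[k].T @ (H @ W[k]) for k in A)
    H = relu(msg + H)  # residual update keeps node identity

# Graph-level readout: mean-pool node states, then a linear
# "is this function vulnerable?" score.
score = 1.0 / (1.0 + np.exp(-(H.mean(axis=0) @ w_out)))
print("toy vulnerability probability:", score)
```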

Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples

A Athalye, N Carlini, D Wagner - International conference on …, 2018 - proceedings.mlr.press
We identify obfuscated gradients, a kind of gradient masking, as a phenomenon that leads to
a false sense of security in defenses against adversarial examples. While defenses that …
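
The circumvention technique this paper introduces, Backward Pass Differentiable Approximation (BPDA), can be sketched at toy scale: run the real non-differentiable defense on the forward pass, but back-propagate as if it were the identity. The quantization defense and linear model below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup (invented for the example): a linear classifier behind a
# non-differentiable input "defense" g(x) = coarse quantization.
w, b = rng.normal(size=10), 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def g(x, levels=8):
    # Gradient-masking preprocessing: quantize each feature.
    return np.round(x * levels) / levels

def bpda_attack(x, y, eps=0.3, steps=20, step_size=0.05):
    x_adv = x.copy()
    for _ in range(steps):
        # Forward pass uses the real defense g ...
        p = sigmoid(w @ g(x_adv) + b)
        # ... but the backward pass approximates dg/dx with the
        # identity (BPDA), so the input gradient is just (p - y) * w.
        grad = (p - y) * w
        x_adv = x_adv + step_size * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay in the eps-ball
    return x_adv

x, y = rng.normal(size=10), 1.0
print("defended clean p:", sigmoid(w @ g(x) + b))
print("defended adv   p:", sigmoid(w @ g(bpda_attack(x, y)) + b))
```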

Adversarial examples: Attacks and defenses for deep learning

X Yuan, P He, Q Zhu, X Li - IEEE Transactions on Neural …, 2019 - ieeexplore.ieee.org
With rapid progress and significant successes in a wide spectrum of applications, deep
learning is being applied in many safety-critical environments. However, deep neural …
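
Surveys such as this one typically take the fast gradient sign method (FGSM) as the canonical attack; here is a self-contained numpy version on a toy logistic-regression model (all weights and parameters are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network" (invented for the example):
# p(y=1|x) = sigmoid(w.x + b) on 10-d inputs.
w, b = rng.normal(size=10), 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, eps=0.1):
    # For a linear model, the input gradient of the cross-entropy
    # loss is (p - y) * w; FGSM takes one eps step along its sign.
    p = sigmoid(w @ x + b)
    return x + eps * np.sign((p - y) * w)

x, y = rng.normal(size=10), 1.0
x_adv = fgsm(x, y)
print("clean p(y=1):", sigmoid(w @ x + b))
print("adv   p(y=1):", sigmoid(w @ x_adv + b))
```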

Certified robustness to adversarial examples with differential privacy

M Lecuyer, V Atlidakis, R Geambasu… - … IEEE Symposium on …, 2019 - ieeexplore.ieee.org
Adversarial examples that fool machine learning models, particularly deep neural networks,
have been a topic of intense research interest, with attacks and defenses being developed …
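
The defense this paper proposes (PixelDP) inserts a differential-privacy noise layer and predicts with the expected output. A schematic numpy sketch of that noise-plus-expectation step, using the standard Gaussian-mechanism calibration and a toy 3-class linear model; the paper's actual certification bound is omitted:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 3-class linear "network" (weights invented for the example).
W = rng.normal(size=(3, 10))

def smoothed_predict(x, eps=1.0, delta=1e-5, sensitivity=1.0,
                     n_draws=300):
    # Noise layer: Gaussian noise calibrated as in the Gaussian
    # mechanism for (eps, delta)-DP,
    # sigma = sensitivity * sqrt(2 ln(1.25/delta)) / eps.
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    # Monte Carlo estimate of the *expected* softmax prediction,
    # the quantity the paper's certification reasons about.
    probs = np.zeros(3)
    for _ in range(n_draws):
        s = W @ (x + sigma * rng.normal(size=x.shape))
        e = np.exp(s - s.max())
        probs += e / e.sum()
    return probs / n_draws

x = rng.normal(size=10)
print("expected prediction:", smoothed_predict(x))
```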

A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability

X Huang, D Kroening, W Ruan, J Sharp, Y Sun… - Computer Science …, 2020 - Elsevier
In the past few years, significant progress has been made with deep neural networks
(DNNs), which have achieved human-level performance on several long-standing tasks. With the broader …

Synthesizing robust adversarial examples

A Athalye, L Engstrom, A Ilyas… - … conference on machine …, 2018 - proceedings.mlr.press
Standard methods for generating adversarial examples for neural networks do not
consistently fool neural network classifiers in the physical world due to a combination of …
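
The paper's key tool is Expectation over Transformation (EOT): optimize the perturbation against the expected loss under a distribution of transformations. A minimal numpy sketch, with additive Gaussian noise standing in for the physical-world transformations and a toy linear model replacing the classifier:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for a classifier: p(y=1|x) = sigmoid(w.x).
w = rng.normal(size=10)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_x(x, y):
    # Input gradient of the cross-entropy loss: (p - y) * w.
    return (sigmoid(w @ x) - y) * w

def eot_attack(x, y, eps=0.3, steps=30, step_size=0.05,
               n_samples=32, noise=0.2):
    # EOT: each step averages the input gradient over random
    # transformations t ~ T (here additive Gaussian noise), so the
    # perturbation must work in expectation, not just pointwise.
    x_adv = x.copy()
    for _ in range(steps):
        g = np.mean([grad_x(x_adv + noise * rng.normal(size=x.shape), y)
                     for _ in range(n_samples)], axis=0)
        x_adv = np.clip(x_adv + step_size * np.sign(g), x - eps, x + eps)
    return x_adv

x, y = rng.normal(size=10), 1.0
x_adv = eot_attack(x, y)
print("clean p:", sigmoid(w @ x), " adv p under random noise:",
      np.mean([sigmoid(w @ (x_adv + 0.2 * rng.normal(size=10)))
               for _ in range(200)]))
```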

Adversarial examples are not easily detected: Bypassing ten detection methods

N Carlini, D Wagner - Proceedings of the 10th ACM workshop on …, 2017 - dl.acm.org
Neural networks are known to be vulnerable to adversarial examples: inputs that are close
to natural inputs but classified incorrectly. In order to better understand the space of …
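
The bypasses in this paper share one recipe: fold the detector into the attacker's objective and optimize against classifier and detector jointly. A toy numpy sketch of such a combined objective (both linear models and all constants are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy victim: linear classifier p(y=1|x) and linear detector
# p(adversarial|x); both weight vectors are invented.
wf = rng.normal(size=10)
wd = rng.normal(size=10)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bypass_attack(x, y, c=1.0, eps=0.3, steps=50, step_size=0.02):
    # Ascend a combined objective: the classification loss plus
    # c * log(1 - p_detect), so the input both fools the classifier
    # and looks "natural" to the detector.
    x_adv = x.copy()
    for _ in range(steps):
        pf = sigmoid(wf @ x_adv)   # classifier p(y=1)
        pd = sigmoid(wd @ x_adv)   # detector p(adversarial)
        grad = (pf - y) * wf - c * pd * wd
        x_adv = np.clip(x_adv + step_size * np.sign(grad),
                        x - eps, x + eps)
    return x_adv

x, y = rng.normal(size=10), 1.0
x_adv = bypass_attack(x, y)
print("classifier p:", sigmoid(wf @ x_adv),
      " detector p:", sigmoid(wd @ x_adv))
```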