Adversarial machine learning for network intrusion detection systems: A comprehensive survey
A Network-based Intrusion Detection System (NIDS) forms the frontline defence against
network attacks that compromise the security of data, systems, and networks. In recent …
Adversarial attacks and defenses in images, graphs and text: A review
Deep neural networks (DNNs) have achieved unprecedented success in numerous machine
learning tasks in various domains. However, the existence of adversarial examples raises …
Using pre-training can improve model robustness and uncertainty
He et al. (2018) have called into question the utility of pre-training by showing that
training from scratch can often yield similar performance to pre-training. We show that …
Combining graph-based learning with automated data collection for code vulnerability detection
This paper presents FUNDED (Flow-sensitive vUl-Nerability coDE Detection), a novel
learning framework for building vulnerability detection models. FUNDED leverages the …
Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples
We identify obfuscated gradients, a kind of gradient masking, as a phenomenon that leads to
a false sense of security in defenses against adversarial examples. While defenses that …
Adversarial examples: Attacks and defenses for deep learning
With rapid progress and significant successes in a wide spectrum of applications, deep
learning is being applied in many safety-critical environments. However, deep neural …
Certified robustness to adversarial examples with differential privacy
Adversarial examples that fool machine learning models, particularly deep neural networks,
have been a topic of intense research interest, with attacks and defenses being developed …
A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability
In the past few years, significant progress has been made on deep neural networks (DNNs)
in achieving human-level performance on several long-standing tasks. With the broader …
Synthesizing robust adversarial examples
Standard methods for generating adversarial examples for neural networks do not
consistently fool neural network classifiers in the physical world due to a combination of …
Adversarial examples are not easily detected: Bypassing ten detection methods
Neural networks are known to be vulnerable to adversarial examples: inputs that are close
to natural inputs but classified incorrectly. In order to better understand the space of …
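The last entry above defines adversarial examples as inputs close to natural ones yet classified incorrectly. A minimal sketch of one classic attack of this kind, the fast gradient sign method (FGSM, a standard technique not tied to any single paper listed here), on a hypothetical toy logistic-regression classifier; the weights, input, and epsilon below are made-up illustration values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Shift x by eps in the sign of the loss gradient w.r.t. the input.

    For logistic regression with cross-entropy loss, that gradient is
    (sigmoid(w @ x + b) - y) * w.
    """
    grad = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad)

# Toy classifier and an input correctly classified as class 0 (y = 0).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([-0.1, 0.1])            # w @ x + b = -0.3  ->  class 0

x_adv = fgsm_perturb(x, w, b, y=0.0, eps=0.2)

print(sigmoid(w @ x + b) < 0.5)      # True: original is class 0
print(sigmoid(w @ x_adv + b) > 0.5)  # True: small perturbation flips it
```

Here a perturbation of only 0.2 per coordinate is enough to flip the decision, which is the core phenomenon the surveyed attack and defence papers study.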