On interpretability of artificial neural networks: A survey

FL Fan, J Xiong, M Li, G Wang - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Deep learning as performed by artificial deep neural networks (DNNs) has recently achieved
great success in many important areas that deal with text, images, videos, graphs, and …

Topology attack and defense for graph neural networks: An optimization perspective

K Xu, H Chen, S Liu, PY Chen, TW Weng… - arXiv preprint arXiv …, 2019 - arxiv.org
Graph neural networks (GNNs), which apply deep neural networks to graph data, have
achieved strong performance on the task of semi-supervised node classification …
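
A minimal sketch of the gradient-guided edge-flipping idea behind such topology attacks, assuming a one-layer GCN with random placeholder features and labels; the greedy single-edge flip below stands in for the paper's full optimization-based formulation:

```python
import torch
import torch.nn.functional as F

def gcn_logits(A, X, W):
    # One-layer GCN: D^{-1/2} (A + I) D^{-1/2} X W
    A_hat = A + torch.eye(A.size(0))
    d_inv_sqrt = torch.diag(A_hat.sum(1).pow(-0.5))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W

A = (torch.rand(8, 8) < 0.3).float()
A = torch.triu(A, 1); A = A + A.T           # undirected graph, no self-loops
A.requires_grad_(True)
X, W = torch.randn(8, 5), torch.randn(5, 3)
y = torch.randint(0, 3, (8,))

loss = F.cross_entropy(gcn_logits(A, X, W), y)
loss.backward()

# Greedy step: flip the one edge whose change most increases the loss
# (positive gradient on absent edges, negative on present ones).
score = torch.triu(A.grad * (1 - 2 * A.detach()), 1)
i, j = divmod(score.argmax().item(), 8)
with torch.no_grad():
    A[i, j] = A[j, i] = 1 - A[i, j]
```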

Adversarial T-shirt! Evading person detectors in a physical world

K Xu, G Zhang, S Liu, Q Fan, M Sun, H Chen… - Computer vision–ECCV …, 2020 - Springer
It is known that deep neural networks (DNNs) are vulnerable to adversarial attacks. So-
called physical adversarial examples deceive DNN-based decision makers by attaching …
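
A minimal sketch of optimizing a printable patch against a surrogate model; the tiny classifier, the fixed paste location, and treating class 0 as the "person" score are all placeholders for the paper's person detector and its deformable-cloth modeling:

```python
import torch
import torch.nn as nn

# Stand-in detector: a tiny classifier whose class-0 logit plays the "person" score.
surrogate = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
patch = torch.rand(3, 8, 8, requires_grad=True)   # learnable patch texture
opt = torch.optim.Adam([patch], lr=0.01)
images = torch.rand(4, 3, 32, 32)                 # stand-in wearer photos

for _ in range(100):
    x = images.clone()
    x[:, :, 12:20, 12:20] = patch.clamp(0, 1)     # paste patch at a fixed spot
    loss = surrogate(x)[:, 0].mean()              # suppress the "person" logit
    opt.zero_grad(); loss.backward(); opt.step()
```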

NATTACK: Learning the distributions of adversarial examples for an improved black-box attack on deep neural networks

Y Li, L Li, L Wang, T Zhang… - … conference on machine …, 2019 - proceedings.mlr.press
Powerful adversarial attack methods are vital for understanding how to construct robust
deep neural networks (DNNs) and for thoroughly testing defense techniques. In this paper …
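
A minimal sketch of the distribution-learning idea: fit the mean of a Gaussian over perturbations using only loss values from black-box queries (an NES-style estimator); the victim model, the margin loss, and all hyperparameters are illustrative assumptions:

```python
import torch

# Black-box victim and one target image; both are placeholders.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(1, 3, 32, 32), 3

def query_loss(x_adv):
    with torch.no_grad():                 # only outputs are observed, no gradients
        logits = model(x_adv)
    true = logits[0, y]
    other = logits[0].clone(); other[y] = -1e9
    return (true - other.max()).item()    # <= 0 once the label flips

mu, sigma, lr, n = torch.zeros(1, 3, 32, 32), 0.1, 0.02, 20
for _ in range(200):
    eps = torch.randn(n, 1, 3, 32, 32)
    losses = torch.tensor([query_loss((x + 0.05 * (mu + sigma * e).tanh()).clamp(0, 1))
                           for e in eps])
    z = (losses - losses.mean()) / (losses.std() + 1e-8)
    grad_est = (z.view(n, 1, 1, 1, 1) * eps).mean(0) / sigma
    mu = mu - lr * grad_est               # move the distribution toward misclassification
```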

Patch-wise attack for fooling deep neural network

L Gao, Q Zhang, J Song, X Liu, HT Shen - Computer Vision–ECCV 2020 …, 2020 - Springer
Adversarial examples, crafted by adding human-imperceptible noise to clean images,
can fool other, unknown models. Features of a pixel extracted by deep neural networks …
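
A minimal sketch of the iterative FGSM baseline that transfer attacks of this kind build on; the surrogate model and step sizes below are assumptions, and the paper's amplified, patch-level gradient treatment is omitted:

```python
import torch
import torch.nn.functional as F

def ifgsm(model, x, y, eps=8/255, alpha=2/255, steps=10):
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()        # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project to the L_inf ball
            x_adv = x_adv.clamp(0, 1)
    return x_adv

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(2, 3, 32, 32), torch.randint(0, 10, (2,))
x_adv = ifgsm(model, x, y)    # crafted on the surrogate, then transferred elsewhere
```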

Adversarial robustness vs. model compression, or both?

S Ye, K Xu, S Liu, H Cheng… - Proceedings of the …, 2019 - openaccess.thecvf.com
It is well known that deep neural networks (DNNs) are vulnerable to adversarial attacks,
which are implemented by adding crafted perturbations to benign examples. Min-max …
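
A minimal sketch of the min-max objective (adversarial training) referenced here, with a PGD inner maximization and an SGD outer step; the model, data, and the paper's pruning/ADMM constraints are simplified away:

```python
import torch
import torch.nn.functional as F

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.rand(16, 1, 28, 28), torch.randint(0, 10, (16,))

# Inner maximization: PGD searches for the worst-case perturbation.
delta = torch.zeros_like(x, requires_grad=True)
for _ in range(7):
    loss = F.cross_entropy(model(x + delta), y)
    g, = torch.autograd.grad(loss, delta)
    delta = (delta + 0.01 * g.sign()).clamp(-0.03, 0.03).detach().requires_grad_(True)

# Outer minimization: update the weights on the adversarial batch.
opt.zero_grad()
F.cross_entropy(model(x + delta.detach()), y).backward()
opt.step()
```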

Feature separation and recalibration for adversarial robustness

WJ Kim, Y Cho, J Jung… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Deep neural networks are susceptible to adversarial attacks due to the accumulation of
perturbations at the feature level, and numerous works have boosted model robustness by …
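
A loose sketch of a separate-then-recalibrate module in this spirit: a learned gate softly splits a feature map, and the "non-robust" part is recalibrated rather than discarded; the gating design and layer sizes are assumptions, not the paper's exact FSR block:

```python
import torch
import torch.nn as nn

class SeparateRecalibrate(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.recal = nn.Conv2d(channels, channels, 1)

    def forward(self, f):
        m = self.gate(f)                         # per-unit "robustness" score in [0, 1]
        robust, nonrobust = m * f, (1 - m) * f   # soft feature separation
        return robust + self.recal(nonrobust)    # recalibrated, not dropped

feats = torch.randn(2, 64, 8, 8)
out = SeparateRecalibrate(64)(feats)             # same shape as the input features
```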

Loss-based attention for deep multiple instance learning

X Shi, F Xing, Y Xie, Z Zhang, L Cui… - Proceedings of the AAAI …, 2020 - ojs.aaai.org
Although attention mechanisms have been widely used in deep learning for many tasks,
they are rarely utilized to solve multiple instance learning (MIL) problems, where only a …
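
A minimal sketch of attention pooling for MIL, where the bag prediction is a weighted sum of instance features; the plain softmax attention below is a stand-in for the paper's loss-based attention weights:

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, dim, classes):
        super().__init__()
        self.score = nn.Linear(dim, 1)        # one relevance score per instance
        self.head = nn.Linear(dim, classes)

    def forward(self, bag):                   # bag: (instances, dim)
        a = torch.softmax(self.score(bag), dim=0)   # attention over instances
        z = (a * bag).sum(0)                  # bag-level embedding
        return self.head(z), a.squeeze(-1)

bag = torch.randn(12, 256)                    # 12 instances, only the bag label is known
logits, weights = AttentionMIL(256, 2)(bag)
```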

Interpreting and improving adversarial robustness of deep neural networks with neuron sensitivity

C Zhang, A Liu, X Liu, Y Xu, H Yu… - IEEE Transactions on …, 2020 - ieeexplore.ieee.org
Deep neural networks (DNNs) are vulnerable to adversarial examples, where inputs with
imperceptible perturbations mislead DNNs into incorrect results. Despite the potential risk they …
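
A minimal sketch of one way to measure neuron sensitivity: compare a layer's activations on benign inputs and their adversarial counterparts; the model, the hooked layer, and the random-sign perturbations below are placeholders for a real attack:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
acts = {}
model[2].register_forward_hook(lambda m, i, o: acts.update(h=o))  # capture ReLU output

x = torch.rand(32, 1, 28, 28)
x_adv = (x + 0.03 * torch.randn_like(x).sign()).clamp(0, 1)       # stand-in attack

model(x);     benign = acts["h"]
model(x_adv); adv    = acts["h"]

# Sensitivity per neuron: average absolute activation shift under attack.
sensitivity = (benign - adv).abs().mean(0)
print(sensitivity.topk(5).indices)            # most attack-sensitive neurons
```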

Proper network interpretability helps adversarial robustness in classification

A Boopathy, S Liu, G Zhang, C Liu… - International …, 2020 - proceedings.mlr.press
Recent works have empirically shown that there exist adversarial examples that can be
hidden from neural network interpretability (namely, making network interpretation maps …
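
A minimal sketch of the input-gradient interpretation map that such attacks try to keep unchanged; the linear model and random input are placeholders, and the paper's interpretation maps (e.g., CAM-style) are more involved:

```python
import torch

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32, requires_grad=True)

score = model(x)[0].max()                    # top-class score
score.backward()
saliency = x.grad.abs().max(dim=1).values    # (1, 32, 32) per-pixel importance map
```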