On interpretability of artificial neural networks: A survey
Deep learning as performed by artificial deep neural networks (DNNs) has achieved great
successes recently in many important areas that deal with text, images, videos, graphs, and …
Topology attack and defense for graph neural networks: An optimization perspective
Graph neural networks (GNNs), which apply deep neural networks to graph data, have
achieved significant performance on the task of semi-supervised node classification …
Adversarial t-shirt! evading person detectors in a physical world
It is known that deep neural networks (DNNs) are vulnerable to adversarial attacks. The so-
called physical adversarial examples deceive DNN-based decision makers by attaching …
Nattack: Learning the distributions of adversarial examples for an improved black-box attack on deep neural networks
Powerful adversarial attack methods are vital for understanding how to construct robust
deep neural networks (DNNs) and for thoroughly testing defense techniques. In this paper …
Patch-wise attack for fooling deep neural network
By adding human-imperceptible noise to clean images, the resultant adversarial examples
can fool other unknown models. Features of a pixel extracted by deep neural networks …
Adversarial robustness vs. model compression, or both?
It is well known that deep neural networks (DNNs) are vulnerable to adversarial attacks,
which are implemented by adding crafted perturbations onto benign examples. Min-max …
Feature separation and recalibration for adversarial robustness
Deep neural networks are susceptible to adversarial attacks due to the accumulation of
perturbations in the feature level, and numerous works have boosted model robustness by …
Loss-based attention for deep multiple instance learning
Although attention mechanisms have been widely used in deep learning for many tasks,
they are rarely utilized to solve multiple instance learning (MIL) problems, where only a …
Interpreting and improving adversarial robustness of deep neural networks with neuron sensitivity
Deep neural networks (DNNs) are vulnerable to adversarial examples where inputs with
imperceptible perturbations mislead DNNs to incorrect results. Despite the potential risk they …
Proper network interpretability helps adversarial robustness in classification
Recent works have empirically shown that there exist adversarial examples that can be
hidden from neural network interpretability (namely, making network interpretation maps …