Interpreting adversarial examples in deep learning: A review
Deep learning technology is increasingly being applied in safety-critical scenarios but has
recently been found to be susceptible to imperceptible adversarial perturbations. This raises …
Understanding robust overfitting of adversarial training and beyond
Robust overfitting widely exists in adversarial training of deep networks. The exact
underlying reasons for this are still not completely understood. Here, we explore the causes …
Multi-target knowledge distillation via student self-reflection
Knowledge distillation is a simple yet effective technique for deep model
compression, which aims to transfer the knowledge learned by a large teacher model to a …
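The soft-target distillation loss that this line of work builds on can be sketched as follows. This is a generic illustration of temperature-scaled distillation (not the self-reflection method of the paper above); all names and the toy logits are my own.

```python
import numpy as np

def softmax(z, T=1.0):
    """Numerically stable softmax at temperature T."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend of KL(teacher || student) at temperature T and
    hard-label cross-entropy on the student's ordinary predictions."""
    p_t = softmax(teacher_logits, T)   # softened teacher distribution
    p_s = softmax(student_logits, T)   # softened student distribution
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    soft = (T ** 2) * kl.mean()        # T^2 rescales the soft-target term
    hard_p = softmax(student_logits, 1.0)
    ce = -np.log(hard_p[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * soft + (1 - alpha) * ce
```

Raising T flattens both distributions, so the student is pushed to match the teacher's relative ranking over wrong classes, not just its top prediction.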
Ze-HFS: Zentropy-based uncertainty measure for heterogeneous feature selection and knowledge discovery
Knowledge discovery of heterogeneous data is an active topic in knowledge engineering.
Feature selection for heterogeneous data is an important part of effective data analysis …
Improving robustness of vision transformers by reducing sensitivity to patch corruptions
Despite their success, vision transformers still remain vulnerable to image corruptions, such
as noise or blur. Indeed, we find that the vulnerability mainly stems from the unstable self …
WAT: improve the worst-class robustness in adversarial training
Deep Neural Networks (DNNs) have been shown to be vulnerable to adversarial
examples. Adversarial training (AT) is a popular and effective strategy to defend against …
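The adversarial training loop referred to here alternates an inner maximization (crafting a worst-case perturbation within an ε-ball) with an outer minimization (fitting the perturbed inputs). A minimal FGSM-based sketch on a NumPy logistic-regression toy, assuming my own function names and data; real AT uses multi-step PGD on deep networks:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    """Inner max: one-step FGSM, moving x along the sign of the
    input-gradient of the binary cross-entropy loss."""
    p = sigmoid(x @ w)
    grad_x = np.outer(p - y, w)        # d(BCE)/dx for logistic regression
    return x + eps * np.sign(grad_x)

def adversarial_train(x, y, eps=0.1, lr=0.5, steps=200):
    """Outer min: gradient descent on adversarially perturbed inputs."""
    w = np.zeros(x.shape[1])
    for _ in range(steps):
        x_adv = fgsm(x, y, w, eps)
        p = sigmoid(x_adv @ w)
        w -= lr * x_adv.T @ (p - y) / len(y)
    return w
```

Per-class (worst-class) robustness methods such as WAT additionally reweight this objective across classes, which the sketch above does not show.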
Feature separation and recalibration for adversarial robustness
Deep neural networks are susceptible to adversarial attacks due to the accumulation of
perturbations in the feature level, and numerous works have boosted model robustness by …
Towards intrinsic adversarial robustness through probabilistic training
Modern deep neural networks have made numerous breakthroughs in real-world
applications, yet they remain vulnerable to some imperceptible adversarial perturbations …
Explaining Adversarial Robustness of Neural Networks from Clustering Effect Perspective
Adversarial training (AT) is the most commonly used mechanism to improve the robustness
of deep neural networks. Recently, a novel adversarial attack against intermediate layers …
Robust weight perturbation for adversarial training
Overfitting widely exists in adversarial robust training of deep networks. An effective remedy
is adversarial weight perturbation, which injects the worst-case weight perturbation during …
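The core move of adversarial weight perturbation can be sketched in a few lines: ascend the loss a small distance in weight space, then take the descent gradient at those perturbed weights. This is a simplified single-layer illustration under my own assumptions (logistic regression, a global perturbation radius gamma), not the paper's exact layer-wise procedure:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_grad_w(w, x, y):
    """Gradient of binary cross-entropy w.r.t. the weights."""
    p = sigmoid(x @ w)
    return x.T @ (p - y) / len(y)

def awp_step(w, x, y, lr=0.5, gamma=0.01):
    """One update: perturb weights along the normalized loss-ascent
    direction, then descend using the gradient at the perturbed point."""
    g = bce_grad_w(w, x, y)
    v = gamma * g / (np.linalg.norm(g) + 1e-12)  # worst-case weight shift
    g_adv = bce_grad_w(w + v, x, y)              # gradient at perturbed weights
    return w - lr * g_adv
```

Because the update direction is evaluated at a locally worst-case point in weight space, the procedure biases training toward flatter loss regions, which is the mechanism these works connect to reduced robust overfitting.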