Interpreting adversarial examples in deep learning: A review

S Han, C Lin, C Shen, Q Wang, X Guan - ACM Computing Surveys, 2023 - dl.acm.org
Deep learning technology is increasingly being applied in safety-critical scenarios but has
recently been found to be susceptible to imperceptible adversarial perturbations. This raises …

Understanding robust overfitting of adversarial training and beyond

C Yu, B Han, L Shen, J Yu, C Gong… - International …, 2022 - proceedings.mlr.press
Robust overfitting widely exists in adversarial training of deep networks. The exact
underlying reasons for this are still not completely understood. Here, we explore the causes …

Multi-target knowledge distillation via student self-reflection

J Gou, X Xiong, B Yu, L Du, Y Zhan, D Tao - International Journal of …, 2023 - Springer
Knowledge distillation is a simple yet effective technique for deep model
compression, which aims to transfer the knowledge learned by a large teacher model to a …
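For context, this entry builds on vanilla knowledge distillation (Hinton et al., 2015), in which a student network matches the teacher's temperature-softened output distribution alongside the hard labels. Below is a minimal PyTorch sketch of that baseline objective; the temperature T and mixing weight alpha are illustrative defaults, and this shows the single-teacher baseline rather than the multi-target self-reflection method proposed above.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Vanilla knowledge-distillation loss (Hinton et al., 2015).

    Combines hard-label cross-entropy with a KL term between
    temperature-softened teacher and student distributions.
    T and alpha are illustrative hyperparameters, not values
    taken from the paper above.
    """
    # Hard-label term on the ground-truth classes.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label term: KL(teacher || student) at temperature T.
    # The T**2 factor keeps gradient magnitudes comparable across T.
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1.0 - alpha) * kl
```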

Ze-HFS: Zentropy-based uncertainty measure for heterogeneous feature selection and knowledge discovery

K Yuan, D Miao, W Pedrycz, W Ding… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Knowledge discovery of heterogeneous data is an active topic in knowledge engineering.
Feature selection for heterogeneous data is an important part of effective data analysis …

Improving robustness of vision transformers by reducing sensitivity to patch corruptions

Y Guo, D Stutz, B Schiele - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
Despite their success, vision transformers remain vulnerable to image corruptions, such
as noise or blur. Indeed, we find that the vulnerability mainly stems from the unstable self …

WAT: improve the worst-class robustness in adversarial training

B Li, W Liu - Proceedings of the AAAI conference on artificial …, 2023 - ojs.aaai.org
Deep Neural Networks (DNNs) have been shown to be vulnerable to adversarial
examples. Adversarial training (AT) is a popular and effective strategy to defend against …
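Several entries in this list build on the PGD adversarial training baseline (Madry et al., 2018): an inner loop crafts a worst-case L∞ perturbation of each input, and the outer loop trains on the perturbed batch. A minimal PyTorch sketch is below; eps, alpha, and steps are illustrative CIFAR-style defaults, and the per-class reweighting that WAT adds on top of this baseline is not shown.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization: craft an L-infinity PGD adversarial example."""
    x = x.detach()
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """Outer minimization: one training step on the perturbed batch."""
    model.eval()                      # keep BN statistics stable while attacking
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```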

Feature separation and recalibration for adversarial robustness

WJ Kim, Y Cho, J Jung… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Deep neural networks are susceptible to adversarial attacks due to the accumulation of
perturbations at the feature level, and numerous works have boosted model robustness by …

Towards intrinsic adversarial robustness through probabilistic training

J Dong, L Yang, Y Wang, X Xie… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Modern deep neural networks have made numerous breakthroughs in real-world
applications, yet they remain vulnerable to some imperceptible adversarial perturbations …

Explaining Adversarial Robustness of Neural Networks from Clustering Effect Perspective

Y Jin, X Zhang, J Lou, X Ma… - Proceedings of the …, 2023 - openaccess.thecvf.com
Adversarial training (AT) is the most commonly used mechanism to improve the robustness
of deep neural networks. Recently, a novel adversarial attack against intermediate layers …

Robust weight perturbation for adversarial training

C Yu, B Han, M Gong, L Shen, S Ge, B Du… - arXiv preprint arXiv…, 2022 - arxiv.org
Overfitting widely exists in adversarially robust training of deep networks. An effective remedy
is adversarial weight perturbation, which injects the worst-case weight perturbation during …
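The mechanism this snippet names can be sketched concretely: adversarial weight perturbation (AWP) takes one ascent step in weight space, scaled relative to each layer's norm, computes the training gradient at the perturbed weights, and restores the weights before the optimizer update. A simplified PyTorch sketch follows, assuming the adversarial batch x_adv was already crafted by an input-space attack such as PGD; gamma is an illustrative value and the specific robust criterion proposed in this paper is not reproduced.

```python
import torch
import torch.nn.functional as F

def awp_training_step(model, optimizer, x_adv, y, gamma=5e-3):
    """One step of adversarial weight perturbation training (simplified).

    Perturbs each weight tensor by a norm-scaled ascent step (gamma),
    backpropagates the loss at the perturbed weights, then restores them
    so the optimizer updates the original parameters. Biases and BN
    parameters are skipped, a common simplification.
    """
    params = [p for p in model.parameters() if p.requires_grad and p.dim() > 1]

    # 1) Ascent direction in weight space on the adversarial batch.
    loss = F.cross_entropy(model(x_adv), y)
    grads = torch.autograd.grad(loss, params)

    # 2) Apply the worst-case weight perturbation, scaled per layer.
    deltas = []
    with torch.no_grad():
        for p, g in zip(params, grads):
            delta = gamma * p.norm() * g / (g.norm() + 1e-12)
            p.add_(delta)
            deltas.append(delta)

    # 3) Outer minimization: gradients taken at the perturbed weights.
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()

    # 4) Restore the weights before stepping, so the update applies
    #    to the unperturbed parameters.
    with torch.no_grad():
        for p, delta in zip(params, deltas):
            p.sub_(delta)
    optimizer.step()
```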