Adversarial attacks and defenses in deep learning: From a perspective of cybersecurity

S Zhou, C Liu, D Ye, T Zhu, W Zhou, PS Yu - ACM Computing Surveys, 2022 - dl.acm.org
The outstanding performance of deep neural networks has promoted deep learning
applications in a broad set of domains. However, the potential risks caused by adversarial …

Interpreting adversarial examples in deep learning: A review

S Han, C Lin, C Shen, Q Wang, X Guan - ACM Computing Surveys, 2023 - dl.acm.org
Deep learning technology is increasingly being applied in safety-critical scenarios but has
recently been found to be susceptible to imperceptible adversarial perturbations. This raises …

Trustworthy graph neural networks: Aspects, methods and trends

H Zhang, B Wu, X Yuan, S Pan, H Tong… - arXiv preprint arXiv …, 2022 - arxiv.org
Graph neural networks (GNNs) have emerged as a series of competent graph learning
methods for diverse real-world scenarios, ranging from daily applications like …

On the effectiveness of Lipschitz-driven rehearsal in continual learning

L Bonicelli, M Boschini, A Porrello… - Advances in …, 2022 - proceedings.neurips.cc
Rehearsal approaches enjoy immense popularity with Continual Learning (CL)
practitioners. These methods collect samples from previously encountered data distributions …

Combating bilateral edge noise for robust link prediction

Z Zhou, J Yao, J Liu, X Guo, Q Yao… - Advances in …, 2024 - proceedings.neurips.cc
Although link prediction on graphs has achieved great success with the development of
graph neural networks (GNNs), the potential robustness under the edge noise is still less …

AdvRush: Searching for adversarially robust neural architectures

J Mok, B Na, H Choe, S Yoon - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
Deep neural networks continue to awe the world with their remarkable performance. Their
predictions, however, are prone to be corrupted by adversarial examples that are …

Safari: Versatile and efficient evaluations for robustness of interpretability

W Huang, X Zhao, G **… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Interpretability of Deep Learning (DL) is a barrier to trustworthy AI. Despite great
efforts made by the Explainable AI (XAI) community, explanations lack robustness …

Res: A robust framework for guiding visual explanation

Y Gao, TS Sun, G Bai, S Gu, SR Hong… - proceedings of the 28th …, 2022 - dl.acm.org
Despite the fast progress of explanation techniques in modern Deep Neural Networks
(DNNs), where the main focus is handling "how to generate the explanations", advanced …

Prior and posterior networks: A survey on evidential deep learning methods for uncertainty estimation

D Ulmer, C Hardmeier, J Frellsen - arXiv preprint arXiv:2110.03051, 2021 - arxiv.org
Popular approaches for quantifying predictive uncertainty in deep neural networks often
involve distributions over weights or multiple models, for instance via Markov Chain …

Adversarial attacks and defenses on ML- and hardware-based IoT device fingerprinting and identification

PMS Sánchez, AH Celdrán, G Bovet… - Future Generation …, 2024 - Elsevier
In recent years, the number of deployed IoT devices has grown explosively,
reaching the scale of billions. However, some new cybersecurity issues have appeared …