AI robustness: a human-centered perspective on technological challenges and opportunities

A Tocchetti, L Corti, A Balayn, M Yurrita… - ACM Computing …, 2022 - dl.acm.org
Despite the impressive performance of Artificial Intelligence (AI) systems, their robustness
remains elusive and constitutes a key issue that impedes large-scale adoption. Besides …

Stable adversarial learning under distributional shifts

J Liu, Z Shen, P Cui, L Zhou, K Kuang, B Li… - Proceedings of the AAAI …, 2021 - ojs.aaai.org
Machine learning algorithms with empirical risk minimization are vulnerable under
distributional shifts due to the greedy adoption of all the correlations found in training data …

Distributionally robust learning with stable adversarial training

J Liu, Z Shen, P Cui, L Zhou… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Machine learning algorithms with empirical risk minimization are vulnerable under
distributional shifts due to the greedy adoption of all the correlations found in training data …
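
The two entries above are the conference and journal versions of the same line of work. As a rough illustration only, and not the authors' Stable Adversarial Learning algorithm, the sketch below shows generic adversarial training that perturbs input covariates inside an L2 ball before each update, approximating a distributionally robust objective; the model, radius eps, and step counts are assumptions.

    # Minimal sketch of adversarial training over covariate perturbations
    # (illustrative only; not the paper's Stable Adversarial Learning).
    import torch
    import torch.nn.functional as F

    def worst_case_delta(model, x, y, eps=0.1, steps=3, lr=0.05):
        """Find a perturbation delta with ||delta||_2 <= eps that raises the loss."""
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(steps):
            loss = F.cross_entropy(model(x + delta), y)
            grad, = torch.autograd.grad(loss, delta)
            with torch.no_grad():
                delta += lr * grad                        # ascend the loss
                norms = delta.flatten(1).norm(dim=1).clamp(min=1e-12)
                scale = (eps / norms).clamp(max=1.0)
                delta *= scale.view(-1, *([1] * (x.dim() - 1)))  # project to ball
        return delta.detach()

    def robust_step(model, optimizer, x, y, eps=0.1):
        """One training step on the worst-case perturbed batch."""
        delta = worst_case_delta(model, x, y, eps)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        optimizer.step()
        return loss.item()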

Mask-guided noise restriction adversarial attacks for image classification

Y Duan, X Zhou, J Zou, J Qiu, J Zhang, Z Pan - Computers & Security, 2021 - Elsevier
Deep neural networks (DNNs) are vulnerable to adversarial examples, which are generated
by adding small noise to benign examples yet cause a deep model to produce inaccurate …
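
For context, the canonical way such small additive noise is generated is the fast gradient sign method (FGSM) of Goodfellow et al.; the sketch below shows plain FGSM, not the paper's mask-guided variant, which additionally restricts where the noise may be placed (roughly, multiplying the noise by a binary spatial mask).

    # Plain FGSM, to illustrate the small-additive-noise attack family;
    # the paper's method would further constrain the noise with a mask.
    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, eps=8 / 255):
        """Return x + eps * sign(grad_x loss), clipped to the valid pixel range."""
        x = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x), y).backward()
        return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()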

Enhancing spiking neural networks with hybrid top-down attention

F Liu, R Zhao - Frontiers in Neuroscience, 2022 - frontiersin.org
As the representatives of brain-inspired models at the neuronal level, spiking neural
networks (SNNs) have shown great promise in processing spatiotemporal information with …
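
For readers unfamiliar with the neuron model itself, the sketch below gives a minimal discrete-time leaky integrate-and-fire (LIF) update, the building block most SNN work rests on; the constants are illustrative, and this is not the paper's hybrid top-down attention mechanism.

    # Minimal leaky integrate-and-fire (LIF) update: leak, integrate, spike, reset.
    import numpy as np

    def lif_step(v, x, tau=2.0, v_th=1.0):
        v = v + (x - v) / tau                 # leaky integration of input current x
        spikes = (v >= v_th).astype(v.dtype)  # spike where the threshold is crossed
        v = v * (1.0 - spikes)                # hard reset for neurons that fired
        return v, spikes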

Denoised internal models: a brain-inspired autoencoder against adversarial attacks

KY Liu, XY Li, YR Lai, H Su, JC Wang, CX Guo… - Machine Intelligence …, 2022 - Springer
Despite its great success, deep learning severely suffers from a lack of robustness; i.e., deep
neural networks are very vulnerable to adversarial attacks, even the simplest ones. Inspired by …
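
The core ingredient here, a denoising autoencoder used as an input-purification front end, can be sketched in a few lines; the architecture and noise level below are assumptions, not the paper's Denoised Internal Models design.

    # Denoising autoencoder sketch: learn to map corrupted inputs back to
    # clean ones; a defended classifier would then score classifier(dae(x)).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DenoisingAE(nn.Module):
        def __init__(self, dim=784, hidden=128):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
            self.dec = nn.Sequential(nn.Linear(hidden, dim), nn.Sigmoid())

        def forward(self, x):
            return self.dec(self.enc(x))

    def dae_loss(dae, x_clean, noise_std=0.3):
        """Reconstruct the clean input from a noise-corrupted copy."""
        x_noisy = x_clean + noise_std * torch.randn_like(x_clean)
        return F.mse_loss(dae(x_noisy), x_clean)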

Impact of attention on adversarial robustness of image classification models

P Agrawal, NS Punn, SK Sonbhadra… - … Conference on Big …, 2021 - ieeexplore.ieee.org
Adversarial attacks against deep learning models have gained significant attention, and
recent works have proposed explanations for the existence of adversarial examples and …

Defending Adversarial Attacks Against ASV Systems Using Spectral Masking

S Sreekanth, K Sri Rama Murty - Circuits, Systems, and Signal Processing, 2024 - Springer
Automatic speaker verification (ASV) is the task of authenticating the claimed identity of a
speaker from his/her voice characteristics. Despite the improved performance achieved by …
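
As a rough illustration of spectral masking as a pre-processing defense, the sketch below zeroes low-energy time-frequency bins, where small adversarial perturbations tend to concentrate, before audio reaches the ASV system; the per-frame threshold rule is an assumption, not the paper's exact mask.

    # Spectral-masking sketch: suppress STFT bins far below each frame's peak.
    import numpy as np
    from scipy.signal import stft, istft

    def spectral_mask(audio, fs=16000, nperseg=512, floor_db=-40.0):
        """Zero bins more than |floor_db| dB below the frame's peak magnitude."""
        _, _, Z = stft(audio, fs=fs, nperseg=nperseg)
        mag_db = 20.0 * np.log10(np.abs(Z) + 1e-12)
        mask = mag_db >= (mag_db.max(axis=0, keepdims=True) + floor_db)
        _, cleaned = istft(Z * mask, fs=fs, nperseg=nperseg)
        return cleaned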

Stationary Point Losses for Robust Model

W Gao, D Zhang, Y Li, Z Guo, O Petrosian - arXiv preprint arXiv …, 2023 - arxiv.org
The inability to guarantee robustness is one of the major obstacles to the application of deep
learning models in security-demanding domains. We identify that the most commonly used …

Multi-stationary point losses for robust model

W Gao, Y Li, J Gao, Z Guo, D Zhang - openreview.net
We identify that cross-entropy (CE) loss does not guarantee a robust decision boundary for neural
networks. The reason is that CE loss has only one asymptotic stationary point. It stops …
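
To make the single-stationary-point claim concrete (notation mine): for a binary example with logit margin z, softmax cross-entropy reduces to

    \ell(z) = \log(1 + e^{-z}), \qquad \ell'(z) = -\frac{e^{-z}}{1 + e^{-z}} = -\sigma(-z),

so the gradient vanishes only in the limit z \to \infty. The loss therefore never settles an example at a finite distance from the boundary; it keeps pushing every margin larger at an ever-smaller rate, which is presumably what motivates losses with additional stationary points at finite margins.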