AI robustness: a human-centered perspective on technological challenges and opportunities
Despite the impressive performance of Artificial Intelligence (AI) systems, their robustness
remains elusive and constitutes a key issue that impedes large-scale adoption. Besides …
Stable adversarial learning under distributional shifts
Machine learning algorithms with empirical risk minimization are vulnerable under
distributional shifts due to the greedy adoption of all the correlations found in training data …
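The vulnerability described in this snippet can be seen in a toy sketch of my own construction (not the paper's method): an empirical-risk-minimizing fit latches onto a spurious feature that agrees with the label 95% of the time in training, and its accuracy collapses once that correlation reverses at test time. All names and the data-generating process here are illustrative assumptions.

```python
import numpy as np

# Toy illustration (not the cited paper's algorithm): ERM greedily uses a
# spurious correlation and fails under distributional shift.
rng = np.random.default_rng(0)
n = 2000

def make_data(corr):
    y = rng.choice([-1.0, 1.0], size=n)
    x_causal = y + 1.0 * rng.normal(size=n)        # stable but noisy causal feature
    agree = rng.random(n) < corr                   # spurious feature agrees with y
    x_spur = np.where(agree, y, -y) + 0.1 * rng.normal(size=n)
    return np.column_stack([x_causal, x_spur]), y

X_tr, y_tr = make_data(corr=0.95)   # spurious feature agrees 95% of the time
X_te, y_te = make_data(corr=0.05)   # correlation reversed under shift

w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)   # plain ERM (least squares)
acc = lambda X, y: np.mean(np.sign(X @ w) == y)
print(acc(X_tr, y_tr), acc(X_te, y_te))           # train acc high, test acc collapses
```

Because the spurious feature is far less noisy than the causal one in training, least squares puts most of its weight on it, which is exactly the "greedy adoption of correlations" the abstract refers to.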
Distributionally robust learning with stable adversarial training
Machine learning algorithms with empirical risk minimization are vulnerable under
distributional shifts due to the greedy adoption of all the correlations found in training data …
Mask-guided noise restriction adversarial attacks for image classification
Y Duan, X Zhou, J Zou, J Qiu, J Zhang, Z Pan - Computers & Security, 2021 - Elsevier
Deep neural networks (DNNs) are vulnerable to adversarial examples, which are generated
by adding small noises to the benign examples, but make a deep model output inaccurate …
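The "small noises" generation scheme the snippet mentions can be sketched with the classic fast gradient sign method (FGSM, Goodfellow et al.), which is a standard baseline and not the paper's mask-guided attack. The model here is a hypothetical two-feature logistic classifier with fixed weights, chosen only so the example is self-contained.

```python
import numpy as np

# FGSM-style sketch: add a small sign-of-gradient noise to a benign input
# so a simple logistic model flips its prediction. Weights and input are
# assumed values for illustration only.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([2.0, -3.0])     # fixed "trained" weights (assumed)
x = np.array([0.5, 0.2])      # benign example, true label y = 1
y = 1.0

p = sigmoid(w @ x)            # gradient of logistic loss w.r.t. the input
grad_x = (p - y) * w

eps = 0.3                     # small noise budget
x_adv = x + eps * np.sign(grad_x)

print(sigmoid(w @ x) > 0.5)       # benign prediction (correct)
print(sigmoid(w @ x_adv) > 0.5)   # adversarial prediction (flipped)
```

The perturbation stays within an L∞ ball of radius 0.3 around the benign example, yet the model's output changes, matching the snippet's description of small noises causing inaccurate outputs.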
Enhancing spiking neural networks with hybrid top-down attention
F Liu, R Zhao - Frontiers in Neuroscience, 2022 - frontiersin.org
As the representatives of brain-inspired models at the neuronal level, spiking neural
networks (SNNs) have shown great promise in processing spatiotemporal information with …
Denoised internal models: a brain-inspired autoencoder against adversarial attacks
Despite its great success, deep learning severely suffers from a lack of robustness; i.e., deep neural
networks are very vulnerable to adversarial attacks, even the simplest ones. Inspired by …
Impact of attention on adversarial robustness of image classification models
Adversarial attacks against deep learning models have gained significant attention and
recent works have proposed explanations for the existence of adversarial examples and …
Defending Adversarial Attacks Against ASV Systems Using Spectral Masking
S Sreekanth, K Sri Rama Murty - Circuits, Systems, and Signal Processing, 2024 - Springer
Automatic speaker verification (ASV) is the task of authenticating the claimed identity of a
speaker from his/her voice characteristics. Despite the improved performance achieved by …
Stationary Point Losses for Robust Model
W Gao, D Zhang, Y Li, Z Guo, O Petrosian - arXiv preprint arXiv:…, 2023 - arxiv.org
The inability to guarantee robustness is one of the major obstacles to the application of deep
learning models in security-demanding domains. We identify that the most commonly used …
Multi-stationary point losses for robust model
W Gao, Y Li, J Gao, Z Guo, D Zhang - openreview.net
We identify that cross-entropy (CE) loss does not guarantee robust boundary for neural
networks. The reason is that CE loss has only one asymptotic stationary point. It stops …
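The "only one asymptotic stationary point" claim about cross-entropy can be checked numerically: for a correctly classified example, the CE gradient with respect to the logit shrinks as the logit grows but only vanishes in the limit, so optimization keeps pushing logits outward rather than settling at a finite margin. The helper names below are my own; only the gradient formula follows from the standard binary cross-entropy definition.

```python
import numpy as np

# Numerical sketch of CE's asymptotic stationary point: the gradient of
# -log(sigmoid(z)) for a positive example is sigmoid(z) - 1, which tends
# to 0 only as z -> infinity and is nonzero for every finite logit.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ce_grad(z, y=1.0):
    # d/dz of -[y*log(p) + (1-y)*log(1-p)] with p = sigmoid(z)
    return sigmoid(z) - y

for z in [1.0, 5.0, 10.0, 20.0]:
    print(z, abs(ce_grad(z)))   # magnitude shrinks but never reaches zero
```

This is the behavior the abstract points to: with a single asymptotic stationary point, CE provides no finite logit value at which training "stops", which the multi-stationary-point losses proposed in the paper are designed to change.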