Adversarial example detection for DNN models: A review and experimental comparison

A Aldahdooh, W Hamidouche, SA Fezza… - Artificial Intelligence …, 2022 - Springer
Deep learning (DL) has shown great success in many human-related tasks, which has led to
its adoption in many computer vision-based applications, such as security surveillance …

Adversarial machine learning in image classification: A survey toward the defender's perspective

GR Machado, E Silva, RR Goldschmidt - ACM Computing Surveys …, 2021 - dl.acm.org
Deep learning algorithms have achieved state-of-the-art performance for image
classification. For this reason, they have been used even in security-critical applications …

How to certify machine learning based safety-critical systems? A systematic literature review

F Tambon, G Laberge, L An, A Nikanjam… - Automated Software …, 2022 - Springer
Context: Machine Learning (ML) has been at the heart of many innovations over the
past years. However, including it in so-called “safety-critical” systems such as automotive or …

Adversarial attacks against face recognition: A comprehensive study

F Vakhshiteh, A Nickabadi, R Ramachandra - IEEE Access, 2021 - ieeexplore.ieee.org
Face recognition (FR) systems have demonstrated reliable verification performance,
suggesting suitability for real-world applications ranging from photo tagging in social media …

A state-of-the-art review on adversarial machine learning in image classification

A Bajaj, DK Vishwakarma - Multimedia Tools and Applications, 2024 - Springer
Computer vision applications like traffic monitoring, security checks, self-driving cars,
medical imaging, etc., rely heavily on machine learning models. This raises an essential …

Reconstruction-based adversarial attack detection in vision-based autonomous driving systems

M Hussain, JE Hong - Machine Learning and Knowledge Extraction, 2023 - mdpi.com
The perception system is a safety-critical component that directly impacts the overall safety
of autonomous driving systems (ADSs). It is imperative to ensure the robustness of the deep …

On the defense of spoofing countermeasures against adversarial attacks

L Nguyen-Vu, TP Doan, M Bui, K Hong, S Jung - IEEE Access, 2023 - ieeexplore.ieee.org
Advances in speech synthesis have exposed the vulnerability of spoofing countermeasure
(CM) systems. Adversarial attacks exacerbate this problem, mainly due to the reliance of …

UNICAD: A unified approach for attack detection, noise reduction and novel class identification

AL Pellicer, K Giatgong, Y Li, N Suri… - … Joint Conference on …, 2024 - ieeexplore.ieee.org
As the use of Deep Neural Networks (DNNs) becomes pervasive, their vulnerability to
adversarial attacks and limitations in handling unseen classes pose significant challenges …

Adversarial training on purification (AToP): Advancing both robustness and generalization

G Lin, C Li, J Zhang, T Tanaka, Q Zhao - arXiv preprint arXiv:2401.16352, 2024 - arxiv.org
Deep neural networks are known to be vulnerable to well-designed adversarial attacks.
The most successful defense technique based on adversarial training (AT) can achieve …

Detection of adversarial examples in deep neural networks with natural scene statistics

A Kherchouche, SA Fezza… - … Joint Conference on …, 2020 - ieeexplore.ieee.org
Recent studies have demonstrated that deep neural networks (DNNs) are vulnerable to
carefully crafted perturbations added to a legitimate input image. Such perturbed images are …
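
Several of the snippets above refer to carefully crafted perturbations added to a legitimate input image. As a minimal sketch of what such a perturbation looks like in practice, the following PyTorch code implements the classic one-step FGSM attack (Goodfellow et al., 2015); the toy network, the epsilon of 8/255, and the random stand-in image are illustrative assumptions, not details drawn from any of the cited papers.

```python
# Minimal FGSM sketch illustrating the "carefully crafted perturbations"
# that the surveys and detectors listed above study.
# The tiny network, epsilon, and random input are assumptions for
# self-containment, not taken from any of the cited papers.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    """Toy classifier standing in for a real DNN (hypothetical)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.fc = nn.Linear(8 * 32 * 32, num_classes)

    def forward(self, x):
        x = F.relu(self.conv(x))
        return self.fc(x.flatten(1))

def fgsm(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
         eps: float = 8 / 255) -> torch.Tensor:
    """One-step L_inf attack: x_adv = x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep a valid image in [0, 1]

if __name__ == "__main__":
    model = TinyNet().eval()
    x = torch.rand(1, 3, 32, 32)   # stand-in "legitimate" image
    y = torch.tensor([3])          # stand-in true label
    x_adv = fgsm(model, x, y)
    print("max perturbation:", (x_adv - x).abs().max().item())
```

The detectors surveyed above take an input like x_adv and try to flag it as adversarial, e.g. by testing whether the perturbation disturbs natural scene statistics or reconstruction error.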