Advances in adversarial attacks and defenses in computer vision: A survey

N Akhtar, A Mian, N Kardan, M Shah - IEEE Access, 2021 - ieeexplore.ieee.org
Deep Learning is the most widely used tool in the contemporary field of computer vision. Its
ability to accurately solve complex problems is employed in vision research to learn deep …

A survey on adversarial attacks in computer vision: Taxonomy, visualization and future directions

T Long, Q Gao, L Xu, Z Zhou - Computers & Security, 2022 - Elsevier
Deep learning has been widely applied in various fields such as computer vision, natural
language processing, and data mining. Although deep learning has achieved significant …

Threat of adversarial attacks on deep learning in computer vision: A survey

N Akhtar, A Mian - IEEE Access, 2018 - ieeexplore.ieee.org
Deep learning is at the heart of the current rise of artificial intelligence. In the field of
computer vision, it has become the workhorse for applications ranging from self-driving cars …

Towards data-free model stealing in a hard label setting

S Sanyal, S Addepalli, RV Babu - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
Machine learning models deployed as a service (MLaaS) are susceptible to model
stealing attacks, where an adversary attempts to steal the model within a restricted access …

Towards efficient data free black-box adversarial attack

J Zhang, B Li, J Xu, S Wu, S Ding… - Proceedings of the …, 2022 - openaccess.thecvf.com
Classic black-box adversarial attacks can take advantage of transferable adversarial
examples generated by a similar substitute model to successfully fool the target model …
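The entry above names the core mechanism of transfer-based black-box attacks: craft adversarial examples against a locally controlled substitute model and rely on their transferability to fool the inaccessible target. Below is a minimal sketch of that general idea in PyTorch; the tiny stand-in networks, the FGSM perturbation, and the names (make_cnn, fgsm_on_substitute, eps) are illustrative assumptions only, not the specific method proposed in the paper.

```python
# Minimal sketch of a transfer-based black-box attack (illustrative assumptions only).
# The substitute and target networks are hypothetical stand-in CNNs; in practice the
# target is a remote black box and the substitute is trained to approximate it.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_cnn(num_classes: int = 10) -> nn.Module:
    # Tiny convolutional classifier used as a placeholder model.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, num_classes),
    )

substitute = make_cnn()   # white-box surrogate the attacker fully controls
target = make_cnn()       # black-box victim; only its outputs are observable

def fgsm_on_substitute(x: torch.Tensor, y: torch.Tensor, eps: float = 0.03) -> torch.Tensor:
    # Craft an adversarial example with FGSM against the substitute model,
    # then rely on transferability to fool the target model.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(substitute(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 3, 32, 32)   # placeholder input image
y = torch.tensor([3])          # placeholder true label
x_adv = fgsm_on_substitute(x, y)
print("target prediction (clean):", target(x).argmax(1).item())
print("target prediction (adv):  ", target(x_adv).argmax(1).item())
```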

Adversarial attack and defense: A survey

H Liang, E He, Y Zhao, Z Jia, H Li - Electronics, 2022 - mdpi.com
In recent years, artificial intelligence technology represented by deep learning has achieved
remarkable results in image recognition, semantic analysis, natural language processing …

Adv-attribute: Inconspicuous and transferable adversarial attack on face recognition

S Jia, B Yin, T Yao, S Ding, C Shen… - Advances in Neural …, 2022 - proceedings.neurips.cc
Deep learning models have shown their vulnerability to adversarial attacks. Most
existing attacks operate on low-level instances, such as pixels and super-pixels, and …

Threatening patch attacks on object detection in optical remote sensing images

X Sun, G Cheng, L Pei, H Li… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Advanced patch attacks (PAs) on object detection in natural images have revealed a
serious safety vulnerability in methods based on deep neural networks (DNNs). However, little …

Black-box attacks on sequential recommenders via data-free model extraction

Z Yue, Z He, H Zeng, J McAuley - … of the 15th ACM conference on …, 2021 - dl.acm.org
We investigate whether model extraction can be used to 'steal' the weights of sequential
recommender systems, and the potential threats posed to victims of such attacks. This type …

Learning with noisy labels via sparse regularization

X Zhou, X Liu, C Wang, D Zhai… - Proceedings of the …, 2021 - openaccess.thecvf.com
Learning with noisy labels is an important and challenging task for training accurate deep
neural networks. However, some commonly-used loss functions, such as Cross Entropy …