A review of uncertainty quantification in deep learning: Techniques, applications and challenges

M Abdar, F Pourpanah, S Hussain, D Rezazadegan… - Information Fusion, 2021 - Elsevier
Uncertainty quantification (UQ) methods play a pivotal role in reducing the impact of
uncertainties during both optimization and decision-making processes. They have been …

Advances in adversarial attacks and defenses in computer vision: A survey

N Akhtar, A Mian, N Kardan, M Shah - IEEE Access, 2021 - ieeexplore.ieee.org
Deep Learning is the most widely used tool in the contemporary field of computer vision. Its
ability to accurately solve complex problems is employed in vision research to learn deep …

Threat of adversarial attacks on deep learning in computer vision: A survey

N Akhtar, A Mian - IEEE Access, 2018 - ieeexplore.ieee.org
Deep learning is at the heart of the current rise of artificial intelligence. In the field of
computer vision, it has become the workhorse for applications ranging from self-driving cars …

Interpreting adversarial examples in deep learning: A review

S Han, C Lin, C Shen, Q Wang, X Guan - ACM Computing Surveys, 2023 - dl.acm.org
Deep learning technology is increasingly being applied in safety-critical scenarios but has
recently been found to be susceptible to imperceptible adversarial perturbations. This raises …

Adv-Makeup: A new imperceptible and transferable attack on face recognition

B Yin, W Wang, T Yao, J Guo, Z Kong, S Ding… - arXiv preprint arXiv…, 2021 - arxiv.org
Deep neural networks, particularly face recognition models, have been shown to be
vulnerable to both digital and physical adversarial examples. However, existing adversarial …

Towards robustness against natural language word substitutions

X Dong, AT Luu, R Ji, H Liu - arXiv preprint arXiv:2107.13541, 2021 - arxiv.org
Robustness against word substitutions has a well-defined and widely acceptable form, i.e.,
using semantically similar words as substitutions, and thus it is considered a fundamental …

A survey on universal adversarial attack

C Zhang, P Benz, C Lin, A Karjauv, J Wu… - arXiv preprint arXiv…, 2021 - arxiv.org
The intriguing phenomenon of adversarial examples has attracted significant attention in
machine learning, and what might be more surprising to the community is the existence of …

Bias-based universal adversarial patch attack for automatic check-out

A Liu, J Wang, X Liu, B Cao, C Zhang, H Yu - Computer Vision–ECCV …, 2020 - Springer
Adversarial examples are inputs with imperceptible perturbations that easily mislead
deep neural networks (DNNs). Recently, adversarial patch, with noise confined to a small …

Adversarial threats to deepfake detection: A practical perspective

P Neekhara, B Dolhansky, J Bitton… - Proceedings of the …, 2021 - openaccess.thecvf.com
Facially manipulated images and videos, or DeepFakes, can be used maliciously to fuel
misinformation or defame individuals. Therefore, detecting DeepFakes is crucial to increase …

Data-free universal adversarial perturbation and black-box attack

C Zhang, P Benz, A Karjauv… - Proceedings of the IEEE…, 2021 - openaccess.thecvf.com
Universal adversarial perturbation (UAP), i.e., a single perturbation that fools the network on most
images, is widely recognized as a more practical attack because the UAP can be generated …