Advances in adversarial attacks and defenses in computer vision: A survey

N Akhtar, A Mian, N Kardan, M Shah - IEEE Access, 2021 - ieeexplore.ieee.org
Deep Learning is the most widely used tool in the contemporary field of computer vision. Its
ability to accurately solve complex problems is employed in vision research to learn deep …

Threat of adversarial attacks on deep learning in computer vision: A survey

N Akhtar, A Mian - IEEE Access, 2018 - ieeexplore.ieee.org
Deep learning is at the heart of the current rise of artificial intelligence. In the field of
computer vision, it has become the workhorse for applications ranging from self-driving cars …

Towards data-free model stealing in a hard label setting

S Sanyal, S Addepalli, RV Babu - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
Machine learning models deployed as a service (MLaaS) are susceptible to model
stealing attacks, where an adversary attempts to steal the model within a restricted access …

Exploring frequency adversarial attacks for face forgery detection

S Jia, C Ma, T Yao, B Yin, S Ding… - Proceedings of the …, 2022 - openaccess.thecvf.com
Various facial manipulation techniques have raised serious public concerns about morality,
security, and privacy. Although existing face forgery classifiers achieve promising …

Adv-makeup: A new imperceptible and transferable attack on face recognition

B Yin, W Wang, T Yao, J Guo, Z Kong, S Ding… - arXiv preprint arXiv …, 2021 - arxiv.org
… practical face recognition (FR) attacks is due to the black-box nature of the target FR
model, i.e., inaccessible gradient and parameter information to …

Towards efficient data free black-box adversarial attack

J Zhang, B Li, J Xu, S Wu, S Ding… - Proceedings of the …, 2022 - openaccess.thecvf.com
Classic black-box adversarial attacks can take advantage of transferable adversarial
examples generated by a similar substitute model to successfully fool the target model …

Adv-attribute: Inconspicuous and transferable adversarial attack on face recognition

S Jia, B Yin, T Yao, S Ding, C Shen… - Advances in Neural …, 2022 - proceedings.neurips.cc
Deep learning models have shown their vulnerability when dealing with adversarial attacks.
Existing attacks mostly operate on low-level instances, such as pixels and super-pixels, and …

Robustness of SAM: Segment anything under corruptions and beyond

Y Qiao, C Zhang, T Kang, D Kim, C Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
Segment anything model (SAM), as the name suggests, is claimed to be capable of cutting
out any object and demonstrates impressive zero-shot transfer performance with the …

Data-free knowledge transfer: A survey

Y Liu, W Zhang, J Wang, J Wang - arXiv preprint arXiv:2112.15278, 2021 - arxiv.org
In the last decade, many deep learning models have been well trained and have achieved
great success in various fields of machine intelligence, especially computer vision and natural …