Physical adversarial attack meets computer vision: A decade survey

H Wei, H Tang, X Jia, Z Wang, H Yu, Z Li… - … on Pattern Analysis …, 2024 - ieeexplore.ieee.org
Despite the impressive achievements of Deep Neural Networks (DNNs) in computer vision,
their vulnerability to adversarial attacks remains a critical concern. Extensive research has …

A survey on physical adversarial attack in computer vision

D Wang, W Yao, T Jiang, G Tang, X Chen - arXiv preprint arXiv …, 2022 - arxiv.org
Over the past decade, deep learning has revolutionized conventional tasks that rely on hand-
crafted feature extraction with its strong feature learning capability, leading to substantial …

HOIAnimator: Generating text-prompt human-object animations using novel perceptive diffusion models

W Song, X Zhang, S Li, Y Gao, A Hao… - Proceedings of the …, 2024 - openaccess.thecvf.com
To date, the quest to rapidly and effectively produce human-object interaction (HOI)
animations directly from textual descriptions stands at the forefront of computer vision …

Edge video analytics: A survey on applications, systems and enabling techniques

R Xu, S Razavi, R Zheng - IEEE Communications Surveys & …, 2023 - ieeexplore.ieee.org
Video, as a key driver in the global explosion of digital information, can create tremendous
benefits for human society. Governments and enterprises are deploying innumerable …

Face encryption via frequency-restricted identity-agnostic attacks

X Dong, R Wang, S Liang, A Liu, L Jing - Proceedings of the 31st ACM …, 2023 - dl.acm.org
Billions of people are sharing their daily life images on social media every day. However,
malicious collectors use deep face recognition systems to easily steal their biometric …

Artwork protection against neural style transfer using locally adaptive adversarial color attack

Z Guo, J Dong, Y Qian, K Wang, W Li, Z Guo… - ECAI 2024, 2024 - ebooks.iospress.nl
Neural style transfer (NST) generates new images by combining the style of one image with
the content of another. However, unauthorized NST can exploit artwork, raising concerns …

Artificial Immune System of Secure Face Recognition Against Adversarial Attacks

M Ren, Y Wang, Y Zhu, Y Huang, Z Sun, Q Li… - International Journal of …, 2024 - Springer
Deep learning-based face recognition models are vulnerable to adversarial attacks. In
contrast to general noise, the presence of imperceptible adversarial noise can lead to …

Physical adversarial attacks for surveillance: A survey

K Nguyen, T Fernando, C Fookes… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Modern automated surveillance techniques are heavily reliant on deep learning methods.
Despite the superior performance, these learning systems are inherently vulnerable to …

Rethinking impersonation and dodging attacks on face recognition systems

F Zhou, Q Zhou, B Yin, H Zheng, X Lu, L Ma… - Proceedings of the 32nd …, 2024 - dl.acm.org
Face Recognition (FR) systems can be easily deceived by adversarial examples that
manipulate benign face images through imperceptible perturbations. Adversarial attacks on …

Adversarial examples in the physical world: A survey

J Wang, X Liu, J Hu, D Wang, S Wu, T Jiang… - arXiv preprint arXiv …, 2023 - arxiv.org
Deep neural networks (DNNs) have demonstrated high vulnerability to adversarial
examples, raising broad security concerns about their applications. Besides the attacks in …