Adversarial attacks and defenses in deep learning: From a perspective of cybersecurity

S Zhou, C Liu, D Ye, T Zhu, W Zhou, PS Yu - ACM Computing Surveys, 2022 - dl.acm.org
The outstanding performance of deep neural networks has promoted deep learning
applications in a broad set of domains. However, the potential risks caused by adversarial …

Federated learning for generalization, robustness, fairness: A survey and benchmark

W Huang, M Ye, Z Shi, G Wan, H Li… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Federated learning has emerged as a promising paradigm for privacy-preserving
collaboration among different parties. Recently, with the popularity of federated learning, an …

BadEncoder: Backdoor attacks to pre-trained encoders in self-supervised learning

J Jia, Y Liu, NZ Gong - 2022 IEEE Symposium on Security and …, 2022 - ieeexplore.ieee.org
Self-supervised learning in computer vision aims to pre-train an image encoder using a
large amount of unlabeled images or (image, text) pairs. The pre-trained image encoder can …

Shape matters: Deformable patch attack

Z Chen, B Li, S Wu, J Xu, S Ding, W Zhang - European conference on …, 2022 - Springer
Though deep neural networks (DNNs) have demonstrated excellent performance in
computer vision, they are susceptible and vulnerable to carefully crafted adversarial …

SoK: Certified robustness for deep neural networks

L Li, T **e, B Li - 2023 IEEE symposium on security and privacy …, 2023 - ieeexplore.ieee.org
Great advances in deep neural networks (DNNs) have led to state-of-the-art performance on
a wide range of tasks. However, recent studies have shown that DNNs are vulnerable to …

Detecting backdoors in pre-trained encoders

S Feng, G Tao, S Cheng, G Shen… - Proceedings of the …, 2023 - openaccess.thecvf.com
Self-supervised learning in computer vision trains on unlabeled data, such as images or
(image, text) pairs, to obtain an image encoder that learns high-quality embeddings for input …

“Real attackers don't compute gradients”: Bridging the gap between adversarial ML research and practice

G Apruzzese, HS Anderson, S Dambra… - … IEEE Conference on …, 2023 - ieeexplore.ieee.org
Recent years have seen a proliferation of research on adversarial machine learning.
Numerous papers demonstrate powerful algorithmic attacks against a wide variety of …

Towards practical certifiable patch defense with vision transformer

Z Chen, B Li, J Xu, S Wu, S Ding… - Proceedings of the …, 2022 - openaccess.thecvf.com
Patch attacks, one of the most threatening forms of physical attack in adversarial examples,
can lead networks to induce misclassification by modifying pixels arbitrarily in a continuous …

Segment and complete: Defending object detectors against adversarial patch attacks with robust patch detection

J Liu, A Levine, CP Lau… - Proceedings of the …, 2022 - openaccess.thecvf.com
Object detection plays a key role in many security-critical systems. Adversarial patch attacks,
which are easy to implement in the physical world, pose a serious threat to state-of-the-art …

Nearest is not dearest: Towards practical defense against quantization-conditioned backdoor attacks

B Li, Y Cai, H Li, F Xue, Z Li… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Model quantization is widely used to compress and accelerate deep neural
networks. However, recent studies have revealed the feasibility of weaponizing model …