Adversarial attacks and defenses in deep learning: From a perspective of cybersecurity
The outstanding performance of deep neural networks has promoted deep learning
applications in a broad set of domains. However, the potential risks caused by adversarial …
Federated learning for generalization, robustness, fairness: A survey and benchmark
Federated learning has emerged as a promising paradigm for privacy-preserving
collaboration among different parties. Recently, with the popularity of federated learning, an …
BadEncoder: Backdoor attacks to pre-trained encoders in self-supervised learning
Self-supervised learning in computer vision aims to pre-train an image encoder using a
large amount of unlabeled images or (image, text) pairs. The pre-trained image encoder can …
Shape matters: Deformable patch attack
Though deep neural networks (DNNs) have demonstrated excellent performance in
computer vision, they remain vulnerable to carefully crafted adversarial …
SoK: Certified robustness for deep neural networks
Great advances in deep neural networks (DNNs) have led to state-of-the-art performance on
a wide range of tasks. However, recent studies have shown that DNNs are vulnerable to …
Detecting backdoors in pre-trained encoders
Self-supervised learning in computer vision trains on unlabeled data, such as images or
(image, text) pairs, to obtain an image encoder that learns high-quality embeddings for input …
“Real attackers don't compute gradients”: Bridging the gap between adversarial ML research and practice
Recent years have seen a proliferation of research on adversarial machine learning.
Numerous papers demonstrate powerful algorithmic attacks against a wide variety of …
Towards practical certifiable patch defense with vision transformer
Patch attacks, one of the most threatening forms of physical attack in adversarial examples,
can induce misclassification by arbitrarily modifying pixels within a continuous …
Segment and complete: Defending object detectors against adversarial patch attacks with robust patch detection
Object detection plays a key role in many security-critical systems. Adversarial patch attacks,
which are easy to implement in the physical world, pose a serious threat to state-of-the-art …
Nearest is not dearest: Towards practical defense against quantization-conditioned backdoor attacks
Model quantization is widely used to compress and accelerate deep neural
networks. However, recent studies have revealed the feasibility of weaponizing model …