SoK: Certified robustness for deep neural networks
Great advances in deep neural networks (DNNs) have led to state-of-the-art performance on
a wide range of tasks. However, recent studies have shown that DNNs are vulnerable to …
Toward understanding and boosting adversarial transferability from a distribution perspective
Transferable adversarial attacks against deep neural networks (DNNs) have received broad
attention in recent years. An adversarial example can be crafted by a surrogate model and …
LOT: Layer-wise orthogonal training on improving ℓ2 certified robustness
Recent studies show that training deep neural networks (DNNs) with Lipschitz constraints
can enhance adversarial robustness and other model properties such as stability. In …
Rethinking model ensemble in transfer-based adversarial attacks
It is widely recognized that deep learning models lack robustness to adversarial examples.
An intriguing property of adversarial examples is that they can transfer across different …
On the certified robustness for ensemble models and beyond
Recent studies show that deep neural networks (DNNs) are vulnerable to adversarial
examples, which aim to mislead DNNs by adding perturbations of small magnitude. To …
Adversarial attack on attackers: Post-process to mitigate black-box score-based query attacks
The score-based query attacks (SQAs) pose practical threats to deep neural networks by
crafting adversarial perturbations within dozens of queries, only using the model's output …
A little robustness goes a long way: Leveraging robust features for targeted transfer attacks
Adversarial examples for neural network image classifiers are known to be transferable:
examples optimized to be misclassified by a source classifier are often misclassified as well …
Self-ensemble protection: Training checkpoints are good data protectors
As data becomes increasingly vital, a company would be very cautious about releasing data,
because the competitors could use it to train high-performance models, thereby posing a …
Why does little robustness help? A further step towards understanding adversarial transferability
Adversarial examples for deep neural networks (DNNs) are transferable: examples that
successfully fool one white-box surrogate model can also deceive other black-box models …
Understanding and improving ensemble adversarial defense
The ensemble strategy has become popular in adversarial defense, training multiple
base classifiers to defend against adversarial attacks in a cooperative manner. Despite the …