SoK: Certified robustness for deep neural networks

L Li, T Xie, B Li - 2023 IEEE Symposium on Security and Privacy …, 2023 - ieeexplore.ieee.org
Great advances in deep neural networks (DNNs) have led to state-of-the-art performance on
a wide range of tasks. However, recent studies have shown that DNNs are vulnerable to …

Toward understanding and boosting adversarial transferability from a distribution perspective

Y Zhu, Y Chen, X Li, K Chen, Y He… - … on Image Processing, 2022 - ieeexplore.ieee.org
Transferable adversarial attacks against deep neural networks (DNNs) have received broad
attention in recent years. An adversarial example can be crafted by a surrogate model and …

LOT: Layer-wise orthogonal training on improving l2 certified robustness

X Xu, L Li, B Li - Advances in Neural Information Processing …, 2022 - proceedings.neurips.cc
Recent studies show that training deep neural networks (DNNs) with Lipschitz constraints
can enhance adversarial robustness and other model properties such as stability. In …

Rethinking model ensemble in transfer-based adversarial attacks

H Chen, Y Zhang, Y Dong, X Yang, H Su… - arXiv preprint arXiv …, 2023 - arxiv.org
It is widely recognized that deep learning models lack robustness to adversarial examples.
An intriguing property of adversarial examples is that they can transfer across different …

On the certified robustness for ensemble models and beyond

Z Yang, L Li, X Xu, B Kailkhura, T Xie, B Li - arXiv preprint arXiv …, 2021 - arxiv.org
Recent studies show that deep neural networks (DNNs) are vulnerable to adversarial
examples, which aim to mislead DNNs by adding small-magnitude perturbations. To …

Adversarial attack on attackers: Post-process to mitigate black-box score-based query attacks

S Chen, Z Huang, Q Tao, Y Wu… - Advances in Neural …, 2022 - proceedings.neurips.cc
Score-based query attacks (SQAs) pose practical threats to deep neural networks by
crafting adversarial perturbations within dozens of queries, using only the model's output …

A little robustness goes a long way: Leveraging robust features for targeted transfer attacks

J Springer, M Mitchell… - Advances in Neural …, 2021 - proceedings.neurips.cc
Adversarial examples for neural network image classifiers are known to be transferable:
examples optimized to be misclassified by a source classifier are often misclassified as well …

Self-ensemble protection: Training checkpoints are good data protectors

S Chen, G Yuan, X Cheng, Y Gong, M Qin… - arXiv preprint arXiv …, 2022 - arxiv.org
As data becomes increasingly vital, companies are very cautious about releasing data,
because competitors could use it to train high-performance models, thereby posing a …

Why does little robustness help? A further step towards understanding adversarial transferability

Y Zhang, S Hu, LY Zhang, J Shi, M Li… - … IEEE Symposium on …, 2024 - ieeexplore.ieee.org
Adversarial examples for deep neural networks (DNNs) are transferable: examples that
successfully fool one white-box surrogate model can also deceive other black-box models …

Understanding and improving ensemble adversarial defense

Y Deng, T Mu - Advances in Neural Information Processing …, 2024 - proceedings.neurips.cc
The ensemble strategy has become popular in adversarial defense: multiple base
classifiers are trained to defend against adversarial attacks cooperatively. Despite the …