Recent advances in adversarial training for adversarial robustness

T Bai, J Luo, J Zhao, B Wen, Q Wang - arXiv preprint arXiv:2102.01356, 2021 - arxiv.org
Adversarial training is one of the most effective approaches for defending deep learning
models against adversarial examples. Unlike other defense strategies, adversarial training …

Adversarial attacks and defenses in deep learning: From a perspective of cybersecurity

S Zhou, C Liu, D Ye, T Zhu, W Zhou, PS Yu - ACM Computing Surveys, 2022 - dl.acm.org
The outstanding performance of deep neural networks has promoted deep learning
applications in a broad set of domains. However, the potential risks caused by adversarial …

Cross-entropy loss functions: Theoretical analysis and applications

A Mao, M Mohri, Y Zhong - International Conference on …, 2023 - proceedings.mlr.press
Cross-entropy is a widely used loss function in applications. It coincides with the logistic loss
applied to the outputs of a neural network when the softmax is used. But what guarantees …
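The coincidence the snippet mentions can be checked directly: for two classes, the cross-entropy of the softmax equals the logistic loss applied to the difference of the logits. A minimal sketch (function names and logit values here are illustrative, not from the paper):

```python
import math

def softmax_ce(logits, label):
    # cross-entropy of softmax(logits) against an integer class label,
    # with the usual max-shift for numerical stability
    m = max(logits)
    log_norm = math.log(sum(math.exp(z - m) for z in logits))
    return -((logits[label] - m) - log_norm)

def logistic_loss(margin):
    # logistic loss log(1 + exp(-margin)) on a real-valued score margin
    return math.log1p(math.exp(-margin))

# two-class check: softmax cross-entropy on class 1 equals the logistic
# loss on the logit difference z1 - z0
z0, z1 = 0.4, 1.7
assert abs(softmax_ce([z0, z1], 1) - logistic_loss(z1 - z0)) < 1e-12
```

The identity follows from -log(e^{z1}/(e^{z0}+e^{z1})) = log(1 + e^{-(z1-z0)}).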

LAS-AT: adversarial training with learnable attack strategy

X Jia, Y Zhang, B Wu, K Ma… - Proceedings of the …, 2022 - openaccess.thecvf.com
Adversarial training (AT) is always formulated as a minimax problem, of which the
performance depends on the inner optimization that involves the generation of adversarial …
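The minimax formulation this entry refers to pairs an outer minimization over model weights with an inner maximization over bounded input perturbations, typically solved by PGD. A minimal numpy sketch of that inner loop on a toy logistic model rather than a deep network (all names and values are illustrative, not taken from any cited paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(w, x, y):
    # binary cross-entropy of a logistic model p = sigmoid(w . x);
    # a stand-in for the deep network's training loss
    p = sigmoid(np.dot(w, x))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def input_grad(w, x, y):
    # gradient of the loss with respect to the input x
    p = sigmoid(np.dot(w, x))
    return (p - y) * np.asarray(w, dtype=float)

def pgd_attack(w, x, y, eps=0.3, alpha=0.1, steps=10):
    # inner maximization: gradient-sign ascent on the loss, projected
    # back into the l_inf ball of radius eps around the clean input
    x = np.asarray(x, dtype=float)
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(input_grad(w, x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

A full adversarial-training loop would alternate this attack with a gradient step on `w` computed at `x_adv` instead of the clean input; the attacked loss is never below the clean loss by construction.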

Attacks which do not kill training make adversarial learning stronger

J Zhang, X Xu, B Han, G Niu, L Cui… - International …, 2020 - proceedings.mlr.press
Adversarial training based on the minimax formulation is necessary for obtaining adversarial
robustness of trained models. However, it is conservative or even pessimistic so that it …

On the convergence and robustness of adversarial training

Y Wang, X Ma, J Bailey, J Yi, B Zhou, Q Gu - arXiv preprint arXiv …, 2021 - arxiv.org
Improving the robustness of deep neural networks (DNNs) to adversarial examples is an
important yet challenging problem for secure deep learning. Across existing defense …

SegPGD: An effective and efficient adversarial attack for evaluating and boosting segmentation robustness

J Gu, H Zhao, V Tresp, PHS Torr - European Conference on Computer …, 2022 - Springer
Deep neural network-based image classifiers are vulnerable to adversarial
perturbations. The classifiers can be easily fooled by adding small artificial and …

CFA: Class-wise calibrated fair adversarial training

Z Wei, Y Wang, Y Guo, Y Wang - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Adversarial training has been widely acknowledged as the most effective method for improving
the robustness of Deep Neural Networks (DNNs) against adversarial examples …

Understanding robust overfitting of adversarial training and beyond

C Yu, B Han, L Shen, J Yu, C Gong… - International …, 2022 - proceedings.mlr.press
Robust overfitting widely exists in adversarial training of deep networks. The exact
underlying reasons for this are still not completely understood. Here, we explore the causes …

Machine learning in cybersecurity: a comprehensive survey

D Dasgupta, Z Akhtar, S Sen - The Journal of Defense …, 2022 - journals.sagepub.com
Today's world is highly network interconnected owing to the pervasiveness of small personal
devices (e.g., smartphones) as well as large computing devices or services (e.g., cloud …