Better diffusion models further improve adversarial training

Z Wang, T Pang, C Du, M Lin… - International Conference on Machine Learning, 2023 - proceedings.mlr.press
It has been recognized that the data generated by the denoising diffusion probabilistic
model (DDPM) improves adversarial training. After two years of rapid development in …

Unsolved problems in ML safety

D Hendrycks, N Carlini, J Schulman… - arXiv preprint arXiv …, 2021 - arxiv.org
Machine learning (ML) systems are rapidly increasing in size, are acquiring new
capabilities, and are increasingly deployed in high-stakes settings. As with other powerful …

Data augmentation alone can improve adversarial training

L Li, M Spratling - arXiv preprint arXiv:2301.09879, 2023 - arxiv.org
Adversarial training suffers from the issue of robust overfitting, which seriously impairs its
generalization performance. Data augmentation, which is effective at preventing overfitting …

Understanding and combating robust overfitting via input loss landscape analysis and regularization

L Li, M Spratling - Pattern recognition, 2023 - Elsevier
Adversarial training is widely used to improve the robustness of deep neural networks to
adversarial attack. However, adversarial training is prone to overfitting, and the cause is far …

Better safe than sorry: Preventing delusive adversaries with adversarial training

L Tao, L Feng, J Yi, SJ Huang… - Advances in Neural Information Processing Systems, 2021 - proceedings.neurips.cc
Delusive attacks aim to substantially deteriorate the test accuracy of the learning model by
slightly perturbing the features of correctly labeled training examples. By formalizing this …

Machine learning robustness: A primer

HB Braiek, F Khomh - Trustworthy AI in Medical Imaging, 2025 - Elsevier
This chapter explores the foundational concept of robustness in Machine Learning (ML) and
its integral role in establishing trustworthiness in Artificial Intelligence (AI) systems. The …

Sparsity winning twice: Better robust generalization from more efficient training

T Chen, Z Zhang, P Wang, S Balachandra… - arXiv preprint arXiv …, 2022 - arxiv.org
Recent studies demonstrate that deep networks, even robustified by the state-of-the-art
adversarial training (AT), still suffer from large robust generalization gaps, in addition to the …

Adversarial self-supervised learning for robust SAR target recognition

Y Xu, H Sun, J Chen, L Lei, K Ji, G Kuang - Remote Sensing, 2021 - mdpi.com
Synthetic aperture radar (SAR) can perform observations at all times and has been widely
used in the military field. Deep neural network (DNN)-based SAR target recognition models …

Shift from texture-bias to shape-bias: Edge deformation-based augmentation for robust object recognition

X He, Q Lin, C Luo, W Xie, S Song… - Proceedings of the …, 2023 - openaccess.thecvf.com
Recent studies have shown the vulnerability of CNNs to perturbation noise, which is partly
caused by well-trained CNNs being too biased toward the object …

Reliable Model Watermarking: Defending Against Theft without Compromising on Evasion

H Zhu, S Liang, W Hu, L Fangqi, J Jia… - Proceedings of the 32nd …, 2024 - dl.acm.org
With the rise of Machine Learning as a Service (MLaaS) platforms, safeguarding the
intellectual property of deep learning models is becoming paramount. Among various …