Revisiting gradient regularization: Inject robust saliency-aware weight bias for adversarial defense

Q Li, Q Hu, C Lin, D Wu, C Shen - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Although regularizing the Jacobians of neural networks to enhance model robustness has a
direct theoretical correlation with model prediction stability, a large defense performance …
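
For context, the sketch below shows plain input-gradient (Jacobian) regularization in PyTorch: the training loss is augmented with a penalty on the norm of the loss gradient with respect to the input. It illustrates the general idea only and does not reproduce the saliency-aware weight bias proposed in the paper; the `model`, `lam`, and tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def gradient_regularized_loss(model, x, y, lam=0.01):
    """Cross-entropy plus a penalty on the squared norm of the input gradient."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    # Gradient of the loss w.r.t. the input; create_graph keeps the penalty differentiable.
    grad_x, = torch.autograd.grad(ce, x, create_graph=True)
    penalty = grad_x.pow(2).flatten(start_dim=1).sum(dim=1).mean()
    return ce + lam * penalty
```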

Supervised robustness-preserving data-free neural network pruning

MH Meng, G Bai, SG Teo… - 2023 27th International …, 2023 - ieeexplore.ieee.org
When deploying pre-trained neural network models in real-world applications, model
consumers often encounter resource-constrained platforms such as mobile and smart devices …

Revisiting single-step adversarial training for robustness and generalization

Z Li, D Yu, M Wu, S Chan, H Yu, Z Han - Pattern Recognition, 2024 - Elsevier
Recently, single-step adversarial training has received considerable attention because it offers
both robustness and efficiency. However, a phenomenon referred to as “catastrophic overfitting” …
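
As background, the following is a minimal sketch of a single-step (FGSM) adversarial training loop in PyTorch: each batch is perturbed with one gradient-sign step and the model is updated on the perturbed samples. The `model`, `loader`, `optimizer`, and `eps` are assumptions, and the paper's remedies for catastrophic overfitting are not included.

```python
import torch
import torch.nn.functional as F

def fgsm_train_epoch(model, loader, optimizer, eps=8 / 255):
    """One epoch of single-step adversarial training on FGSM examples."""
    model.train()
    for x, y in loader:
        # Craft the adversarial batch with a single gradient-sign step.
        x_adv = x.clone().requires_grad_(True)
        attack_loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(attack_loss, x_adv)
        x_adv = (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

        # Update the model on the adversarial examples.
        optimizer.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        optimizer.step()
```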

Plug-and-pipeline: Efficient regularization for single-step adversarial training

BS Vivek, A Revanur, N Venkat… - 2020 IEEE/CVF …, 2020 - ieeexplore.ieee.org
Adversarial Training (AT) is a straightforward solution for learning robust models by augmenting
the training mini-batches with adversarial samples. Adversarial attack methods range from …
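
For illustration, the sketch below augments a training mini-batch with adversarial samples as the snippet describes, using a generic single-step FGSM perturbation in PyTorch; it is not the Plug-and-Pipeline regularizer itself, and the `eps` value and helper name are assumptions.

```python
import torch
import torch.nn.functional as F

def augmented_batch_loss(model, x, y, eps=8 / 255):
    """Cross-entropy over a mini-batch augmented with FGSM adversarial samples."""
    # Craft an adversarial copy of the batch with a single FGSM step.
    x_req = x.clone().requires_grad_(True)
    attack_loss = F.cross_entropy(model(x_req), y)
    grad, = torch.autograd.grad(attack_loss, x_req)
    x_adv = (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

    # Train on the clean and adversarial samples together.
    x_mix = torch.cat([x, x_adv], dim=0)
    y_mix = torch.cat([y, y], dim=0)
    return F.cross_entropy(model(x_mix), y_mix)
```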

[PDF][PDF] Paoding: Supervised Robustness-preserving Data-free Neural Network Pruning.

MH Meng, G Bai, SG Teo, JS Dong - arxiv preprint arxiv …, 2022 - researchgate.net
When deploying pre-trained neural network models in real-world applications, model
consumers often encounter resource-constrained platforms such as mobile and smart devices …