Stability analysis and generalization bounds of adversarial training
In adversarial machine learning, deep neural networks can fit the adversarial examples on
the training dataset but have poor generalization ability on the test set. This phenomenon is …
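A standard way to quantify the phenomenon this snippet describes is the robust generalization gap, i.e., the difference between the population and empirical adversarial risks. The formulation below uses generic notation (model $f_\theta$, loss $\ell$, $\ell_p$ budget $\epsilon$), not necessarily this paper's:

```latex
% Robust risk, its empirical counterpart, and the robust generalization gap
% (standard definitions; notation is generic, not taken from the paper).
\[
  R_{\mathrm{rob}}(\theta) = \mathbb{E}_{(x,y)\sim\mathcal{D}}
    \Big[\max_{\|\delta\|_p \le \epsilon} \ell\big(f_\theta(x+\delta), y\big)\Big],
  \qquad
  \widehat{R}_{\mathrm{rob}}(\theta) = \frac{1}{n}\sum_{i=1}^{n}
    \max_{\|\delta\|_p \le \epsilon} \ell\big(f_\theta(x_i+\delta), y_i\big),
\]
\[
  \mathrm{gap}_{\mathrm{rob}}(\theta) = R_{\mathrm{rob}}(\theta) - \widehat{R}_{\mathrm{rob}}(\theta).
\]
```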
Understanding adversarial robustness against on-manifold adversarial examples
Deep neural networks (DNNs) are shown to be vulnerable to adversarial examples. A well-
trained model can be easily attacked by adding small perturbations to the original data. One …
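The snippet's observation that a well-trained model can be fooled by small perturbations of the input is usually illustrated with a one-step gradient attack such as FGSM. The sketch below is a generic FGSM baseline, not the paper's on-manifold construction; `model`, `images`, `labels`, and `epsilon` are assumed to be supplied by the caller.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon):
    """One-step FGSM: perturb inputs in the direction of the loss gradient's sign.

    Generic sketch; `model` is any classifier returning logits and `epsilon` is
    the l_inf budget. This is not the on-manifold attack studied in the paper.
    """
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad = torch.autograd.grad(loss, images)[0]
    # Step in the sign of the gradient and keep pixels in a valid [0, 1] range.
    adv = images + epsilon * grad.sign()
    return torch.clamp(adv, 0.0, 1.0).detach()
```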
Uniformly stable algorithms for adversarial training and beyond
In adversarial machine learning, neural networks suffer from a significant issue known as
robust overfitting, where the robust test accuracy decreases over epochs (Rice et al., 2020) …
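For context, the uniform stability referred to in the title is the classical notion of Bousquet and Elisseeff: if replacing any single training example changes the loss on any point by at most $\beta$, the expected generalization gap is bounded by $\beta$. Stated generically (the textbook definition, not the paper's adversarial variant):

```latex
% An algorithm A is \beta-uniformly stable if, for all datasets S and S^{(i)}
% differing in a single example and all points z,
\[
  \sup_{z}\,\big|\ell\big(A(S), z\big) - \ell\big(A(S^{(i)}), z\big)\big| \le \beta ,
\]
% and uniform stability controls generalization in expectation:
\[
  \mathbb{E}_{S}\big[R(A(S)) - \widehat{R}_S(A(S))\big] \le \beta .
\]
```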
Bridging the gap: Rademacher complexity in robust and standard generalization
Training Deep Neural Networks (DNNs) with adversarial examples often results in
poor generalization to test-time adversarial data. This paper investigates this issue, known …
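The complexity measure in this title is the standard one. For a sample $S=(z_1,\dots,z_n)$ and a class $\mathcal{F}$ of functions with range $[0,1]$, the empirical Rademacher complexity and the generalization bound it yields are (textbook statement, not the paper's refined result):

```latex
% \sigma_1, ..., \sigma_n are i.i.d. Rademacher variables, uniform on {-1, +1}.
\[
  \widehat{\mathfrak{R}}_S(\mathcal{F})
  = \mathbb{E}_{\sigma}\Big[\sup_{f\in\mathcal{F}} \frac{1}{n}\sum_{i=1}^{n}\sigma_i f(z_i)\Big],
\]
\[
  \text{and, with probability at least } 1-\delta, \text{ for all } f\in\mathcal{F}:\quad
  \mathbb{E}[f(z)] \le \frac{1}{n}\sum_{i=1}^{n} f(z_i)
    + 2\,\widehat{\mathfrak{R}}_S(\mathcal{F}) + 3\sqrt{\frac{\log(2/\delta)}{2n}} .
\]
```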
A closer look at curriculum adversarial training: from an online perspective
Curriculum adversarial training empirically finds that gradually increasing the hardness of
adversarial examples can further improve the adversarial robustness of the trained model …
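The "gradually increasing hardness" idea in this snippet is often implemented by annealing the perturbation budget over epochs inside an otherwise standard adversarial-training loop. The sketch below is a generic illustration under assumed helpers (`pgd_attack` returning $\ell_\infty$-bounded adversarial examples); the linear schedule is illustrative, not the paper's curriculum.

```python
import torch.nn.functional as F

def epsilon_schedule(epoch, total_epochs, eps_max):
    """Linear curriculum: ramp the budget up to eps_max over the first half of training."""
    return eps_max * min(1.0, (epoch + 1) / (0.5 * total_epochs))

def curriculum_adversarial_training(model, loader, optimizer, pgd_attack,
                                    total_epochs=100, eps_max=8 / 255):
    """Sketch of curriculum AT: standard PGD-AT with an epoch-dependent epsilon.

    `pgd_attack(model, x, y, eps)` is assumed to return adversarial examples
    within an l_inf ball of radius `eps`; any PGD implementation will do.
    """
    for epoch in range(total_epochs):
        eps = epsilon_schedule(epoch, total_epochs, eps_max)
        for x, y in loader:
            x_adv = pgd_attack(model, x, y, eps)  # harder examples as eps grows
            loss = F.cross_entropy(model(x_adv), y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```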
Stability and generalization in free adversarial training
While adversarial training methods have significantly improved the robustness of deep
neural networks against norm-bounded adversarial perturbations, the generalization gap …
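For background, "free" adversarial training (Shafahi et al., 2019) replays each minibatch several times and reuses the gradient of one backward pass to update both the model weights and the perturbation. The sketch below follows that generic recipe (resetting the perturbation per minibatch for simplicity); it is not this paper's analysis or code.

```python
import torch
import torch.nn.functional as F

def free_adversarial_training(model, loader, optimizer, epsilon=8 / 255, replays=4):
    """Sketch of 'free' AT: each backward pass updates both weights and the perturbation."""
    for x, y in loader:
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(replays):  # replay the same minibatch several times
            loss = F.cross_entropy(model(x + delta), y)
            optimizer.zero_grad()
            loss.backward()       # gradients for both the weights and delta
            optimizer.step()      # weight update reuses this single backward pass
            # Ascent step on the perturbation with the same gradient, then project.
            with torch.no_grad():
                delta += epsilon * delta.grad.sign()
                delta.clamp_(-epsilon, epsilon)
            delta.grad.zero_()
```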
Enhancing adversarial robustness for deep metric learning via neural discrete adversarial training
C Li, Z Zhu, R Niu, Y Zhao - Computers & Security, 2024 - Elsevier
Due to the security concerns arising from adversarial vulnerability in deep metric learning
models, it is essential to enhance their adversarial robustness for secure neural network …
RAMP: Boosting Adversarial Robustness Against Multiple Perturbations for Universal Robustness
Most existing works focus on improving robustness against adversarial attacks bounded by
a single $\ell_p$ norm using adversarial training (AT). However, these AT models' multiple …
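A common baseline for the multi-perturbation setting mentioned here is to craft adversarial examples under each norm and train on whichever is stronger per example (a "max" strategy in the spirit of Tramèr and Boneh, 2019). The sketch below assumes generic `attack_linf` and `attack_l2` helpers; it is not the RAMP procedure itself.

```python
import torch
import torch.nn.functional as F

def multi_norm_adversarial_step(model, x, y, optimizer, attack_linf, attack_l2):
    """One 'max'-style multi-norm AT step: train on whichever attack hurts more.

    `attack_linf(model, x, y)` and `attack_l2(model, x, y)` are assumed to return
    adversarial examples inside their respective norm balls (any PGD variant works).
    """
    x_linf = attack_linf(model, x, y)
    x_l2 = attack_l2(model, x, y)
    with torch.no_grad():
        # Per-example losses under each threat model.
        loss_linf = F.cross_entropy(model(x_linf), y, reduction="none")
        loss_l2 = F.cross_entropy(model(x_l2), y, reduction="none")
    # Pick, for every example, the perturbation that maximizes the loss.
    pick_linf = (loss_linf >= loss_l2).view(-1, *([1] * (x.dim() - 1)))
    x_adv = torch.where(pick_linf, x_linf, x_l2)
    loss = F.cross_entropy(model(x_adv), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```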
Towards Universal Certified Robustness with Multi-Norm Training
Existing certified training methods can only train models to be robust against a single
perturbation type (e.g., $\ell_\infty$ or $\ell_2$). However, an $\ell_\infty$ certifiably robust model …
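One reason a single-norm certificate does not transfer across threat models is the dimension-dependent gap between $\ell_2$ and $\ell_\infty$ balls: the standard inequalities below (a generic fact, not the paper's result) show that an $\ell_2$ certificate of radius $\epsilon$ only certifies an $\ell_\infty$ ball of radius $\epsilon/\sqrt{d}$.

```latex
% For any \delta \in \mathbb{R}^d:
\[
  \|\delta\|_\infty \le \|\delta\|_2 \le \sqrt{d}\,\|\delta\|_\infty ,
\]
\[
  \text{so } \|\delta\|_\infty \le \tfrac{\epsilon}{\sqrt{d}} \;\Rightarrow\; \|\delta\|_2 \le \epsilon ,
  \text{ and the implied } \ell_\infty \text{ radius shrinks as the input dimension } d \text{ grows.}
\]
```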
Improving adversarial training for multiple perturbations through the lens of uniform stability
Most existing works on adversarial training (AT) focus on a single type of
perturbation, such as $\ell_\infty$ attacks. However, deep neural networks (DNNs) are …