Stability analysis and generalization bounds of adversarial training
In adversarial machine learning, deep neural networks can fit the adversarial examples on
the training dataset but have poor generalization ability on the test set. This phenomenon is …
On the adversarial robustness of out-of-distribution generalization models
Abstract Out-of-distribution (OOD) generalization has attracted increasing research attention
in recent years, due to its promising experimental results in real-world applications …
PAC-Bayesian spectrally-normalized bounds for adversarially robust generalization
Deep neural networks (DNNs) are vulnerable to adversarial attacks. It is found empirically
that adversarially robust generalization is crucial in establishing defense algorithms against …
Understanding adversarial robustness against on-manifold adversarial examples
Deep neural networks (DNNs) are shown to be vulnerable to adversarial examples. A well-
trained model can be easily attacked by adding small perturbations to the original data. One …
Adversarially robust hypothesis transfer learning
In this work, we explore Hypothesis Transfer Learning (HTL) under adversarial attacks. In
this setting, a learner has access to a training dataset of size $ n $ from an underlying …
Stability and Generalization of Adversarial Training for Shallow Neural Networks with Smooth Activation
Adversarial training has emerged as a popular approach for training models that are robust
to inference-time adversarial attacks. However, our theoretical understanding of why and …
Uniformly stable algorithms for adversarial training and beyond
In adversarial machine learning, neural networks suffer from a significant issue known as
robust overfitting, where the robust test accuracy decreases over epochs (Rice et al., 2020) …
Regularization for adversarial robust learning
Despite the growing prevalence of artificial neural networks in real-world applications, their
vulnerability to adversarial attacks remains a significant concern, which motivates us to …
Transformed low-rank parameterization can help robust generalization for tensor neural networks
Multi-channel learning has gained significant attention in recent applications, where neural
networks with t-product layers (t-NNs) have shown promising performance through novel …
Bridging the gap: Rademacher complexity in robust and standard generalization
Abstract Training Deep Neural Networks (DNNs) with adversarial examples often results in
poor generalization to test-time adversarial data. This paper investigates this issue, known …