Corrective machine unlearning
Machine Learning models increasingly face data integrity challenges due to the use of large-scale training datasets drawn from the Internet. We study what model developers can do if …
Removing batch normalization boosts adversarial training
Adversarial training (AT) defends deep neural networks against adversarial attacks. One challenge that limits its practical application is the performance degradation on clean … (a minimal AT sketch follows at the end of this list)
Exploring memorization in adversarial training
Deep learning models have a propensity for fitting the entire training set even with random labels, which requires memorization of every training sample. In this paper, we explore the …
It is all about data: A survey on the effects of data on adversarial robustness
Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to confuse the model into making a mistake. Such examples pose a …
Towards adversarial evaluations for inexact machine unlearning
Machine Learning models face increased concerns regarding the storage of personal user data and adverse impacts of corrupted data like backdoors or systematic bias. Machine …
How unfair is private learning?
As machine learning algorithms are deployed on sensitive data in critical decision making processes, it is becoming increasingly important that they are also private and fair. In this …
Generalization Ability of Wide Neural Networks on $\mathbb{R}$
We perform a study on the generalization ability of the wide two-layer ReLU neural network on $\mathbb{R}$. We first establish some spectral properties of the neural tangent kernel …
Provable tradeoffs in adversarially robust classification
It is well known that machine learning methods can be vulnerable to adversarially-chosen perturbations of their inputs. Despite significant progress in the area, foundational open …
Attack can benefit: An adversarial approach to recognizing facial expressions under noisy annotations
The real-world Facial Expression Recognition (FER) datasets usually exhibit complex scenarios with coupled noisy annotations and imbalanced class distributions …
Benign overfitting in adversarial training of neural networks
Benign overfitting is the phenomenon wherein none of the predictors in the hypothesis class can achieve perfect accuracy (i.e., non-realizable or noisy setting), but a model that …
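Several of the entries above ("Removing batch normalization boosts adversarial training", "Exploring memorization in adversarial training", "Benign overfitting in adversarial training of neural networks") revolve around adversarial training (AT). As a rough illustration of that shared technique, here is a minimal, hedged PGD-based AT sketch in Python/PyTorch; the toy model, synthetic data, and hyperparameters (eps, alpha, steps) are assumptions chosen for illustration and are not taken from any of the listed papers.

# Hedged sketch: minimal PGD-based adversarial training on synthetic data.
# All names and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy data: 256 points in R^10 with binary labels.
X = torch.randn(256, 10)
y = (X[:, 0] > 0).long()

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def pgd_attack(x, y, eps=0.1, alpha=0.02, steps=10):
    """Projected gradient ascent on the loss inside an L-infinity ball of radius eps."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()        # ascend the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # project back into the ball
    return x_adv.detach()

for epoch in range(20):
    x_adv = pgd_attack(X, y)                  # inner maximization
    loss = F.cross_entropy(model(x_adv), y)   # outer minimization on adversarial inputs
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final robust training loss: {loss.item():.3f}")

In this sketch, pgd_attack approximately maximizes the loss within an L-infinity neighborhood of each input, and the training loop then minimizes the loss on the resulting adversarial examples, which is the standard min-max formulation of AT referenced by the papers above.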