Dataset distillation: A comprehensive review
Recent success of deep learning is largely attributed to the sheer amount of data used for
training deep neural networks. Despite the unprecedented success, the massive data …
Wild patterns reloaded: A survey of machine learning security against training data poisoning
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …
Anti-backdoor learning: Training clean models on poisoned data
Backdoor attacks have emerged as a major security threat to deep neural networks (DNNs).
While existing defense methods have demonstrated promising results on detecting or …
Backdoor learning: A survey
Backdoor attacks intend to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …
Reconstructive neuron pruning for backdoor defense
Deep neural networks (DNNs) have been found to be vulnerable to backdoor attacks,
raising security concerns about their deployment in mission-critical applications. While …
Privacy and robustness in federated learning: Attacks and defenses
As data are increasingly being stored in different silos and society becomes more aware
of data privacy issues, the traditional centralized training of artificial intelligence (AI) models …
“Real attackers don't compute gradients”: Bridging the gap between adversarial ML research and practice
Recent years have seen a proliferation of research on adversarial machine learning.
Numerous papers demonstrate powerful algorithmic attacks against a wide variety of …
Revisiting the assumption of latent separability for backdoor defenses
Recent studies revealed that deep learning is susceptible to backdoor poisoning attacks. An
adversary can embed a hidden backdoor into a model to manipulate its predictions by only …
BadEncoder: Backdoor attacks to pre-trained encoders in self-supervised learning
Self-supervised learning in computer vision aims to pre-train an image encoder using a
large number of unlabeled images or (image, text) pairs. The pre-trained image encoder can …
Blind backdoors in deep learning models
We investigate a new method for injecting backdoors into machine learning models, based
on compromising the loss-value computation in the model-training code. We use it to …