A comprehensive survey on poisoning attacks and countermeasures in machine learning
The prosperity of machine learning has been accompanied by increasing attacks on the
training process. Among them, poisoning attacks have become an emerging threat during …
An overview of backdoor attacks against deep neural networks and possible defences
Together with impressive advances touching every aspect of our society, AI technology
based on Deep Neural Networks (DNN) is bringing increasing security concerns. While …
Adversarial neuron pruning purifies backdoored deep models
As deep neural networks (DNNs) are growing larger, their requirements for computational
resources become huge, which makes outsourcing training more popular. Training in a third …
How to backdoor diffusion models?
Diffusion models are state-of-the-art deep learning empowered generative models that are
trained based on the principle of learning forward and reverse diffusion processes via …
Invisible backdoor attack with sample-specific triggers
Recently, backdoor attacks have posed a new security threat to the training process of deep neural
networks (DNNs). Attackers intend to inject hidden backdoors into DNNs, such that the …
Backdoor learning: A survey
Backdoor attack intends to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …
Adversarial unlearning of backdoors via implicit hypergradient
We propose a minimax formulation for removing backdoors from a given poisoned model
based on a small set of clean data. This formulation encompasses much of prior work on …
Detecting backdoors in pre-trained encoders
Self-supervised learning in computer vision trains on unlabeled data, such as images or
(image, text) pairs, to obtain an image encoder that learns high-quality embeddings for input …
Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses
As machine learning systems grow in scale, so do their training data requirements, forcing
practitioners to automate and outsource the curation of training data in order to achieve state …
A unified evaluation of textual backdoor learning: Frameworks and benchmarks
Textual backdoor attacks are a kind of practical threat to NLP systems. By injecting a
backdoor in the training phase, the adversary could control model predictions via predefined …