Turnitin
降AI改写
早检测系统
早降重系统
Turnitin-UK版
万方检测-期刊版
维普编辑部版
Grammarly检测
Paperpass检测
checkpass检测
PaperYY检测
A comprehensive survey on poisoning attacks and countermeasures in machine learning
The prosperity of machine learning has been accompanied by increasing attacks on the
training process. Among them, poisoning attacks have become an emerging threat during …
training process. Among them, poisoning attacks have become an emerging threat during …
Wild patterns reloaded: A survey of machine learning security against training data poisoning
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …
and large training datasets. The training data is used to learn new models or update existing …
On the exploitability of instruction tuning
Instruction tuning is an effective technique to align large language models (LLMs) with
human intent. In this work, we investigate how an adversary can exploit instruction tuning by …
human intent. In this work, we investigate how an adversary can exploit instruction tuning by …
Data collection and quality challenges in deep learning: A data-centric ai perspective
Data-centric AI is at the center of a fundamental shift in software engineering where machine
learning becomes the new software, powered by big data and computing infrastructure …
learning becomes the new software, powered by big data and computing infrastructure …
Anti-backdoor learning: Training clean models on poisoned data
Backdoor attack has emerged as a major security threat to deep neural networks (DNNs).
While existing defense methods have demonstrated promising results on detecting or …
While existing defense methods have demonstrated promising results on detecting or …
Lira: Learnable, imperceptible and robust backdoor attacks
Recently, machine learning models have demonstrated to be vulnerable to backdoor
attacks, primarily due to the lack of transparency in black-box models such as deep neural …
attacks, primarily due to the lack of transparency in black-box models such as deep neural …
Nightshade: Prompt-specific poisoning attacks on text-to-image generative models
Trained on billions of images, diffusion-based text-to-image models seem impervious to
traditional data poisoning attacks, which typically require poison samples approaching 20 …
traditional data poisoning attacks, which typically require poison samples approaching 20 …
Data poisoning attacks against federated learning systems
Federated learning (FL) is an emerging paradigm for distributed training of large-scale deep
neural networks in which participants' data remains on their own devices with only model …
neural networks in which participants' data remains on their own devices with only model …
Hidden trigger backdoor attacks
With the success of deep learning algorithms in various domains, studying adversarial
attacks to secure deep models in real world applications has become an important research …
attacks to secure deep models in real world applications has become an important research …
Truth serum: Poisoning machine learning models to reveal their secrets
We introduce a new class of attacks on machine learning models. We show that an
adversary who can poison a training dataset can cause models trained on this dataset to …
adversary who can poison a training dataset can cause models trained on this dataset to …