Wild patterns reloaded: A survey of machine learning security against training data poisoning
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …
On the exploitability of instruction tuning
Instruction tuning is an effective technique to align large language models (LLMs) with
human intent. In this work, we investigate how an adversary can exploit instruction tuning by …
Training data influence analysis and estimation: A survey
Good models require good training data. For overparameterized deep models, the causal
relationship between training data and model predictions is increasingly opaque and poorly …
Truth serum: Poisoning machine learning models to reveal their secrets
We introduce a new class of attacks on machine learning models. We show that an
adversary who can poison a training dataset can cause models trained on this dataset to …
Unlearnable 3D point clouds: Class-wise transformation is all you need
Traditional unlearnable strategies have been proposed to prevent unauthorized users from
training on 2D image data. With more 3D point cloud data containing sensitivity …
Sleeper agent: Scalable hidden trigger backdoors for neural networks trained from scratch
As the curation of data for machine learning becomes increasingly automated, dataset
tampering is a mounting threat. Backdoor attackers tamper with training data to embed a …
The path to defence: A roadmap to characterising data poisoning attacks on victim models
Data Poisoning Attacks (DPA) represent a sophisticated technique aimed at distorting the
training data of machine learning models, thereby manipulating their behavior. This process …
Robust unlearnable examples: Protecting data against adversarial learning
The tremendous amount of accessible data in cyberspace faces the risk of being
used without authorization for training deep learning models. To address this concern, methods are …
CUDA: Convolution-based unlearnable datasets
Large-scale training of modern deep learning models heavily relies on publicly available
data on the web. This potentially unauthorized usage of online data leads to concerns …
Image shortcut squeezing: Countering perturbative availability poisons with compression
Perturbative availability poisoning (PAP) adds small changes to images to prevent their use
for model training. Current research adopts the belief that practical and effective approaches …