A comprehensive survey on poisoning attacks and countermeasures in machine learning
The prosperity of machine learning has been accompanied by increasing attacks on the training process. Among them, poisoning attacks have become an emerging threat during …
Wild patterns reloaded: A survey of machine learning security against training data poisoning
The success of machine learning is fueled by the increasing availability of computing power and large training datasets. The training data is used to learn new models or update existing …
Boundary unlearning: Rapid forgetting of deep networks via shifting the decision boundary
The practical needs of the" right to be forgotten" and poisoned data removal call for efficient
machine unlearning techniques, which enable machine learning models to unlearn, or to …
machine unlearning techniques, which enable machine learning models to unlearn, or to …
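The snippet is cut off, but the setting it refers to can be made concrete with the exact-unlearning baseline sketched below: drop the forget set and retrain from scratch. This is only an illustration of the baseline that fast methods such as boundary shifting try to approximate; the helper name `unlearn_by_retraining` and the use of scikit-learn's `LogisticRegression` are assumptions for the sketch, not the paper's algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def unlearn_by_retraining(X, y, forget_idx):
    """Exact-unlearning baseline: discard the forget set and retrain.

    Illustrative only: efficient unlearning methods (e.g. shifting the
    decision boundary) aim to approximate this result without paying
    the full retraining cost.
    """
    keep = np.setdiff1d(np.arange(len(X)), np.asarray(forget_idx))
    return LogisticRegression(max_iter=1000).fit(X[keep], y[keep])
```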
Truth serum: Poisoning machine learning models to reveal their secrets
We introduce a new class of attacks on machine learning models. We show that an adversary who can poison a training dataset can cause models trained on this dataset to …
Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses
Data poisoning---the process by which an attacker takes control of a model by making imperceptible changes to a subset of the training data---is an emerging threat in the context …
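As a rough illustration of the threat model this abstract describes, an attacker who can alter only a small, bounded subset of the training set, here is a minimal poisoning sketch. The function name `poison_subset` and the `frac` and `eps` parameters are assumptions for illustration, not a specific attack from the cited survey.

```python
import numpy as np

def poison_subset(X, y, target_label, frac=0.05, eps=0.03, rng=None):
    """Flip labels and add small perturbations to a random training subset.

    Assumes features are floats in [0, 1] (e.g. normalized pixels). The
    attacker touches only `frac` of the data and keeps feature changes
    bounded by `eps`, so the poisoned points look unremarkable.
    """
    rng = np.random.default_rng(rng)
    idx = rng.choice(len(X), size=int(frac * len(X)), replace=False)

    X_p, y_p = X.copy(), y.copy()
    # Small, bounded feature perturbation on the poisoned subset.
    X_p[idx] = np.clip(X_p[idx] + rng.uniform(-eps, eps, X_p[idx].shape), 0.0, 1.0)
    # Relabel the poisoned subset toward the attacker's target class.
    y_p[idx] = target_label
    return X_p, y_p, idx
```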
Rethinking the backdoor attacks' triggers: A frequency perspective
Backdoor attacks have been considered a severe security threat to deep learning. Such attacks can make models perform abnormally on inputs with predefined triggers and still …
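To make the "predefined trigger" behaviour concrete, the sketch below stamps a generic BadNets-style patch onto images and relabels them, then averages the magnitude spectrum, since patch triggers tend to add high-frequency energy that frequency-domain inspection can pick up. The helper names and parameters are illustrative assumptions, not the construction analysed in the cited paper.

```python
import numpy as np

def add_patch_trigger(images, labels, target_label, patch_size=3, value=1.0):
    """Stamp a small square trigger in the bottom-right corner and relabel.

    `images` is assumed to be an array of shape (N, H, W) with values in
    [0, 1]; every triggered sample is relabeled to the attacker's target.
    """
    triggered = images.copy()
    triggered[:, -patch_size:, -patch_size:] = value
    return triggered, np.full_like(labels, target_label)

def mean_spectrum(images):
    """Average 2-D magnitude spectrum over a batch; comparing this for
    clean vs. triggered images exposes the trigger's frequency artifacts."""
    return np.abs(np.fft.fftshift(np.fft.fft2(images), axes=(-2, -1))).mean(axis=0)
```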