Wild patterns reloaded: A survey of machine learning security against training data poisoning
The success of machine learning is fueled by the increasing availability of computing power and large training datasets. The training data is used to learn new models or update existing …
AI security for geoscience and remote sensing: Challenges and future trends
Recent advances in artificial intelligence (AI) have significantly intensified research in the geoscience and remote sensing (RS) field. AI algorithms, especially deep learning-based …
Poisoning web-scale training datasets is practical
N Carlini, M Jagielski… - … IEEE Symposium on …, 2024 - ieeexplore.ieee.org
Deep learning models are often trained on distributed, web-scale datasets crawled from the internet. In this paper, we introduce two new dataset poisoning attacks that intentionally …
Adversarial neuron pruning purifies backdoored deep models
As deep neural networks (DNNs) grow larger, their computational resource requirements become enormous, which makes outsourcing training increasingly popular. Training in a third …
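The snippet cuts off before describing the defense itself. As a loose illustration of the general mask-then-prune idea behind such defenses (learn a per-neuron mask that keeps clean accuracy even under adversarial perturbation, then prune channels whose mask stays low), here is a minimal PyTorch sketch; MaskedConvNet, prune_by_mask, and every hyperparameter are invented stand-ins, not the paper's exact ANP procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConvNet(nn.Module):
    """Tiny CNN whose conv channels are gated by a learnable per-channel mask."""
    def __init__(self, channels=16, classes=10):
        super().__init__()
        self.conv = nn.Conv2d(3, channels, 3, padding=1)
        self.mask = nn.Parameter(torch.ones(channels))  # one gate per channel
        self.head = nn.Linear(channels, classes)

    def forward(self, x, noise=None):
        m = self.mask if noise is None else self.mask + noise
        h = F.relu(self.conv(x)) * m.view(1, -1, 1, 1)
        return self.head(h.mean(dim=(2, 3)))

def prune_by_mask(model, clean_x, clean_y, steps=100, eps=0.4, lr=0.1, thresh=0.2):
    """Optimize the mask so clean accuracy survives adversarial mask
    perturbations, then hard-prune channels whose gate stays low."""
    opt = torch.optim.SGD([model.mask], lr=lr)
    for _ in range(steps):
        # inner step: one-step ascent to find the mask perturbation that hurts most
        noise = torch.zeros_like(model.mask, requires_grad=True)
        F.cross_entropy(model(clean_x, noise), clean_y).backward()
        adv_noise = eps * noise.grad.sign()
        # outer step: update the mask to minimize loss under that perturbation
        opt.zero_grad()
        F.cross_entropy(model(clean_x, adv_noise), clean_y).backward()
        opt.step()
        model.mask.data.clamp_(0, 1)
    pruned = model.mask.detach() < thresh
    model.mask.data[pruned] = 0.0  # zero out channels deemed backdoor-related
    return pruned.nonzero().flatten()

model = MaskedConvNet()  # in practice: a backdoored model plus a small clean set
x, y = torch.randn(32, 3, 32, 32), torch.randint(0, 10, (32,))
print("pruned channels:", prune_by_mask(model, x, y).tolist())
```

The intuition, per this line of work, is that backdoor behavior concentrates in a few neurons that are unusually sensitive to perturbation, so a mask trained for robustness pushes their gates toward zero.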
BadCLIP: Dual-embedding guided backdoor attack on multimodal contrastive learning
While existing backdoor attacks have successfully infected multimodal contrastive learning models such as CLIP, they can be easily countered by specialized backdoor defenses for …
Narcissus: A practical clean-label backdoor attack with limited information
Backdoor attacks introduce manipulated data into a machine learning model's training set, causing the model to misclassify inputs with a trigger during testing to achieve a desired …
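Since several entries in this list share the recipe just described, a minimal sketch of the simplest (dirty-label, patch-trigger) variant may help fix the idea. Note that Narcissus itself is a clean-label attack that leaves labels intact, so this is illustrative background rather than the paper's method; poison_dataset and add_trigger are hypothetical names, and the data is a random stand-in.

```python
import numpy as np

def poison_dataset(images, labels, target_class=0, rate=0.05, patch=3, seed=0):
    """Dirty-label patch backdoor: stamp a small white square on a random
    subset of training images and flip their labels to the target class.
    images: float array in [0, 1] of shape (N, H, W, C)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -patch:, -patch:, :] = 1.0  # trigger: bottom-right white patch
    labels[idx] = target_class              # label flip makes this "dirty-label"
    return images, labels

def add_trigger(image, patch=3):
    """Apply the same trigger at test time to activate the backdoor."""
    image = image.copy()
    image[-patch:, -patch:, :] = 1.0
    return image

# Usage with random stand-in data (a real attack would use e.g. CIFAR-10):
x = np.random.rand(1000, 32, 32, 3).astype(np.float32)
y = np.random.randint(0, 10, size=1000)
poisoned_x, poisoned_y = poison_dataset(x, y)
```

A model trained on (poisoned_x, poisoned_y) learns to associate the patch with target_class; that spurious correlation is exactly what the defenses cited here try to detect or remove.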
LIRA: Learnable, imperceptible and robust backdoor attacks
Recently, machine learning models have been shown to be vulnerable to backdoor attacks, primarily due to the lack of transparency in black-box models such as deep neural …
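The title names the key idea (a learnable, imperceptible trigger) even though the snippet cuts off. Below is a rough sketch of jointly training a norm-bounded trigger generator with the classifier, under assumed toy shapes and hyperparameters; TriggerGen and the loss weighting are placeholders, not the paper's exact LIRA formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TriggerGen(nn.Module):
    """Maps an image to an eps-bounded additive trigger, so the trigger is
    learned rather than hand-crafted and stays imperceptible by construction."""
    def __init__(self, eps=8 / 255):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),  # output in [-1, 1]
        )

    def forward(self, x):
        return (x + self.eps * self.net(x)).clamp(0, 1)

classifier = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
gen, target = TriggerGen(), 0
opt = torch.optim.Adam(list(classifier.parameters()) + list(gen.parameters()), lr=1e-3)

for step in range(200):  # toy loop on random stand-in data
    x = torch.rand(32, 3, 32, 32)
    y = torch.randint(0, 10, (32,))
    clean_loss = F.cross_entropy(classifier(x), y)  # behave normally on clean inputs
    bd_loss = F.cross_entropy(classifier(gen(x)),   # but send triggered inputs
                              torch.full_like(y, target))  # to the target class
    opt.zero_grad()
    (clean_loss + bd_loss).backward()
    opt.step()
```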
BackdoorBench: A comprehensive benchmark of backdoor learning
Backdoor learning is an emerging and vital topic for studying the vulnerability of deep neural networks (DNNs). Many pioneering backdoor attack and defense methods are being …
Label poisoning is all you need
In a backdoor attack, an adversary injects corrupted data into a model's training dataset in order to gain control over its predictions on images with a specific attacker-defined trigger. A …
Invisible backdoor attack with sample-specific triggers
Backdoor attacks pose a new security threat to the training process of deep neural networks (DNNs). Attackers intend to inject hidden backdoors into DNNs, such that the …