Wild patterns reloaded: A survey of machine learning security against training data poisoning
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …
Backdoor attacks and countermeasures on deep learning: A comprehensive review
This work provides the community with a timely comprehensive review of backdoor attacks
and countermeasures on deep learning. According to the attacker's capability and affected …
Poisoning web-scale training datasets is practical
N Carlini, M Jagielski… - … IEEE Symposium on …, 2024 - ieeexplore.ieee.org
Deep learning models are often trained on distributed, web-scale datasets crawled from the
internet. In this paper, we introduce two new dataset poisoning attacks that intentionally …
Toward transparent ai: A survey on interpreting the inner structures of deep neural networks
The last decade of machine learning has seen drastic increases in scale and capabilities.
Deep neural networks (DNNs) are increasingly being deployed in the real world. However …
Narcissus: A practical clean-label backdoor attack with limited information
Backdoor attacks introduce manipulated data into a machine learning model's training set,
causing the model to misclassify inputs with a trigger during testing to achieve a desired …
Invisible backdoor attack with sample-specific triggers
Backdoor attacks have recently posed a new security threat to the training process of deep
neural networks (DNNs). Attackers intend to inject hidden backdoors into DNNs, such that the …
Backdoor learning: A survey
Backdoor attacks intend to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …
Reconstructive neuron pruning for backdoor defense
Deep neural networks (DNNs) have been found to be vulnerable to backdoor attacks,
raising security concerns about their deployment in mission-critical applications. While …
Backdoor defense via decoupling the training process
Recent studies have revealed that deep neural networks (DNNs) are vulnerable to backdoor
attacks, where attackers embed hidden backdoors in the DNN model by poisoning a few …
Adversarial unlearning of backdoors via implicit hypergradient
We propose a minimax formulation for removing backdoors from a given poisoned model
based on a small set of clean data. This formulation encompasses much of prior work on …