A survey of neural trojan attacks and defenses in deep learning
Artificial Intelligence (AI) relies heavily on deep learning, a technology that is becoming
increasingly popular in real-life applications of AI, even in the safety-critical and high-risk …
Single image backdoor inversion via robust smoothed classifiers
Backdoor inversion, the process of finding a backdoor trigger inserted into a machine
learning model, has become the pillar of many backdoor detection and defense methods …
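The snippet only names the idea, so as a rough illustration of what trigger inversion usually involves (a generic, Neural Cleanse-style optimization, not necessarily this paper's method), the sketch below optimizes a small mask-and-pattern pair so that blending it into clean images drives a classifier toward a chosen target label. The model, clean_loader, target_class names and the 3x32x32 input shape are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def invert_trigger(model, clean_loader, target_class, steps=500, lam=1e-2, device="cpu"):
    """Generic trigger-inversion sketch: recover a mask/pattern pair that
    pushes clean inputs toward target_class (not the cited paper's algorithm)."""
    model.eval()
    for p in model.parameters():          # only the trigger is optimized
        p.requires_grad_(False)
    # Unconstrained parameters; sigmoid keeps mask and pattern in [0, 1].
    mask_logit = torch.zeros(1, 1, 32, 32, device=device, requires_grad=True)
    pattern_logit = torch.zeros(1, 3, 32, 32, device=device, requires_grad=True)
    opt = torch.optim.Adam([mask_logit, pattern_logit], lr=0.1)

    for _ in range(steps):
        for x, _ in clean_loader:
            x = x.to(device)
            mask = torch.sigmoid(mask_logit)
            pattern = torch.sigmoid(pattern_logit)
            # Blend the candidate trigger into every clean image.
            x_trig = (1 - mask) * x + mask * pattern
            logits = model(x_trig)
            target = torch.full((x.size(0),), target_class, device=device)
            # Classification loss plus an L1 penalty that keeps the mask small.
            loss = F.cross_entropy(logits, target) + lam * mask.abs().sum()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return torch.sigmoid(mask_logit).detach(), torch.sigmoid(pattern_logit).detach()
```

In detection pipelines built on this kind of inversion, a recovered mask that is unusually small for one class is the usual signal that a backdoor toward that class may be present.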
Don't FREAK Out: A Frequency-Inspired Approach to Detecting Backdoor Poisoned Samples in DNNs
In this paper we investigate the frequency sensitivity of Deep Neural Networks (DNNs) when
presented with clean samples versus poisoned samples. Our analysis shows significant …
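As a hedged sketch of the kind of frequency-domain comparison the snippet alludes to (not the paper's exact procedure), one can average the log-magnitude 2D DFT spectra of clean and poisoned batches and inspect their difference; the clean_batch and poisoned_batch arrays below are synthetic placeholders.

```python
import numpy as np

def mean_log_spectrum(images):
    """Average log-magnitude 2D DFT over a batch of grayscale images (N, H, W)."""
    spectra = []
    for img in images:
        f = np.fft.fftshift(np.fft.fft2(img))  # center the zero-frequency bin
        spectra.append(np.log1p(np.abs(f)))    # log scale for readability
    return np.mean(spectra, axis=0)

# Placeholder batches; in practice these would be real clean and poisoned samples.
clean_batch = np.random.rand(64, 32, 32)
poisoned_batch = np.clip(clean_batch + 0.1 * np.random.rand(64, 32, 32), 0, 1)

# Large values away from the center of the difference map would hint at a
# high-frequency trigger artifact in the poisoned data.
diff = mean_log_spectrum(poisoned_batch) - mean_log_spectrum(clean_batch)
print(diff.shape, diff.max())
```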
Check your other door! Creating backdoor attacks in the frequency domain
Deep Neural Networks (DNNs) are ubiquitous and span a variety of applications ranging
from image classification to real-time object detection. As DNN models become more …
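To make the title's idea concrete, here is a minimal sketch, under assumed details, of planting a trigger in the frequency domain: a few DFT coefficients of a grayscale image are perturbed (together with their conjugate-symmetric partners, so the result stays real) and the image is transformed back to pixel space. The coefficient positions and amplitude are arbitrary illustrative choices, not the paper's construction.

```python
import numpy as np

def add_frequency_trigger(img, coords=((5, 9), (12, 3)), amplitude=30.0):
    """Insert a trigger by perturbing selected DFT coefficients of a grayscale image in [0, 1]."""
    h, w = img.shape
    f = np.fft.fft2(img)
    for (u, v) in coords:
        f[u, v] += amplitude                  # bump the chosen frequency bin
        f[-u % h, -v % w] += amplitude        # bump its conjugate partner to keep symmetry
    poisoned = np.real(np.fft.ifft2(f))
    return np.clip(poisoned, 0.0, 1.0)

img = np.random.rand(32, 32)
poisoned = add_frequency_trigger(img)
print(np.abs(poisoned - img).max())  # the pixel-space change is small and spread across the image
```

Because the perturbation is spread over all pixels rather than confined to a visible patch, such triggers are typically harder to spot by eye than classic patch-based ones.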
Selective amnesia: On efficient, high-fidelity and blind suppression of backdoor effects in trojaned machine learning models
The extensive application of deep neural networks (DNNs) and their increasingly complicated
architectures and supply chains make the risk of backdoor attacks more realistic than ever. In …
Accumulative poisoning attacks on real-time data
Collecting training data from untrusted sources exposes machine learning services to
poisoning adversaries, who maliciously manipulate training data to degrade the model …
Backdoor Attack and Defense on Deep Learning: A Survey
Y Bai, G Xing, H Wu, Z Rao, C Ma… - IEEE Transactions …, 2024 - ieeexplore.ieee.org
Deep learning, as an important branch of machine learning, has been widely applied in
computer vision, natural language processing, speech recognition, and more. However …
Use procedural noise to achieve backdoor attack
X Chen, Y Ma, S Lu - IEEE Access, 2021 - ieeexplore.ieee.org
In recent years, more researchers have turned their attention to the security of artificial intelligence.
The backdoor attack is one such threat, with a powerful and stealthy attack capability. There …
Preference Poisoning Attacks on Reward Model Learning
Learning utility, or reward, models from pairwise comparisons is a fundamental component
in a number of application domains. These approaches inherently entail collecting …
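The snippet describes learning reward models from pairwise comparisons; the sketch below shows the standard Bradley-Terry style setup that such work typically builds on (an assumption, not necessarily this paper's exact formulation), where the preferred alternative is trained to receive a higher scalar score than the rejected one.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy reward model: maps a feature vector describing an alternative to a scalar score.
reward_model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Placeholder comparison data: x_pref was preferred over x_rej by an annotator.
x_pref = torch.randn(128, 16)
x_rej = torch.randn(128, 16)

for _ in range(100):
    r_pref = reward_model(x_pref).squeeze(-1)
    r_rej = reward_model(x_rej).squeeze(-1)
    # Bradley-Terry / logistic loss: push the preferred score above the rejected one.
    loss = -F.logsigmoid(r_pref - r_rej).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the learned reward depends entirely on these collected comparisons, an adversary who controls even a fraction of the preference labels can bias the resulting scores, which is the attack surface the paper studies.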
Engravings, Secrets, and Interpretability of Neural Networks
This work proposes a definition and examines the problem of undetectably engraving
special input/output information into a Neural Network (NN). Investigation of this problem is …