Backdoor attacks and countermeasures on deep learning: A comprehensive review
This work provides the community with a timely comprehensive review of backdoor attacks
and countermeasures on deep learning. According to the attacker's capability and affected …
Backdoor attacks and defenses targeting multi-domain ai models: A comprehensive review
Since the emergence of security concerns in artificial intelligence (AI), there has been
significant attention devoted to the examination of backdoor attacks. Attackers can utilize …
Toward transparent ai: A survey on interpreting the inner structures of deep neural networks
The last decade of machine learning has seen drastic increases in scale and capabilities.
Deep neural networks (DNNs) are increasingly being deployed in the real world. However …
Untargeted backdoor watermark: Towards harmless and stealthy dataset copyright protection
Y Li, Y Bai, Y Jiang, Y Yang… - Advances in Neural …, 2022 - proceedings.neurips.cc
Deep neural networks (DNNs) have demonstrated their superiority in practice. Arguably, the
rapid development of DNNs has benefited largely from high-quality (open-sourced) datasets …
Februus: Input purification defense against trojan attacks on deep neural network systems
We propose Februus, a new approach to neutralizing highly potent and insidious Trojan attacks on
Deep Neural Network (DNN) systems at run-time. In Trojan attacks, an adversary activates a …
A unified evaluation of textual backdoor learning: Frameworks and benchmarks
Textual backdoor attacks are a practical threat to NLP systems. By injecting a
backdoor in the training phase, the adversary can control model predictions via predefined …
Not all samples are born equal: Towards effective clean-label backdoor attacks
Recent studies demonstrated that deep neural networks (DNNs) are vulnerable to backdoor
attacks. The attacked model behaves normally on benign samples, while its predictions are …
Scale-up: An efficient black-box input-level backdoor detection via analyzing scaled prediction consistency
Deep neural networks (DNNs) are vulnerable to backdoor attacks, where adversaries
embed a hidden backdoor trigger during the training process for malicious prediction …
Rap: Robustness-aware perturbations for defending against backdoor attacks on nlp models
Backdoor attacks, which maliciously control a well-trained model's outputs on instances
with specific triggers, have recently been shown to be serious threats to the safety of reusing deep …
Can we use split learning on 1D CNN models for privacy preserving training?
A new collaborative learning method, called split learning, was recently introduced, aiming to protect
user data privacy without revealing raw input data to a server. It collaboratively runs a deep …