Label poisoning is all you need

R Jha, J Hayase, S Oh - Advances in Neural Information …, 2023 - proceedings.neurips.cc
In a backdoor attack, an adversary injects corrupted data into a model's training dataset in
order to gain control over its predictions on images with a specific attacker-defined trigger. A …

Enhancing fine-tuning based backdoor defense with sharpness-aware minimization

M Zhu, S Wei, L Shen, Y Fan… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Backdoor defense, which aims to detect or mitigate the effect of malicious triggers introduced
by attackers, is becoming increasingly critical for machine learning security and integrity …

Backdoor defense via adaptively splitting poisoned dataset

K Gao, Y Bai, J Gu, Y Yang… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Backdoor defenses have been studied to alleviate the threat of deep neural networks
(DNNs) being attacked via backdoors and thus maliciously altered. Since DNNs usually adopt …

BadCLIP: Trigger-aware prompt learning for backdoor attacks on CLIP

J Bai, K Gao, S Min, ST Xia, Z Li… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Contrastive Vision-Language Pre-training, known as CLIP, has shown promising
effectiveness in addressing downstream image recognition tasks. However, recent works …

Neural polarizer: A lightweight and effective backdoor defense via purifying poisoned features

M Zhu, S Wei, H Zha, B Wu - Advances in Neural …, 2023 - proceedings.neurips.cc
Recent studies have demonstrated the susceptibility of deep neural networks to backdoor
attacks. Given a backdoored model, its prediction of a poisoned sample with trigger will be …

A survey of bit-flip attacks on deep neural network and corresponding defense methods

C Qian, M Zhang, Y Nie, S Lu, H Cao - Electronics, 2023 - mdpi.com
As the machine learning-related technology has made great progress in recent years, deep
neural networks are widely used in many scenarios, including security-critical ones, which …

A comprehensive study on the robustness of deep learning-based image classification and object detection in remote sensing: Surveying and benchmarking

S Mei, J Lian, X Wang, Y Su, M Ma… - Journal of Remote …, 2024 - spj.science.org
Deep neural networks (DNNs) have found widespread applications in interpreting remote
sensing (RS) imagery. However, it has been demonstrated in previous works that DNNs are …

Not all samples are born equal: Towards effective clean-label backdoor attacks

Y Gao, Y Li, L Zhu, D Wu, Y Jiang, ST Xia - Pattern Recognition, 2023 - Elsevier
Recent studies demonstrated that deep neural networks (DNNs) are vulnerable to backdoor
attacks. The attacked model behaves normally on benign samples, while its predictions are …

Black-box dataset ownership verification via backdoor watermarking

Y Li, M Zhu, X Yang, Y Jiang, T Wei… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Deep learning, especially deep neural networks (DNNs), has been widely and successfully
adopted in many critical applications for its high effectiveness and efficiency. The rapid …

Inducing high energy-latency of large vision-language models with verbose images

K Gao, Y Bai, J Gu, ST Xia, P Torr, Z Li… - arXiv preprint arXiv …, 2024 - arxiv.org
Large vision-language models (VLMs) such as GPT-4 have achieved exceptional
performance across various multi-modal tasks. However, the deployment of VLMs …