Neural polarizer: A lightweight and effective backdoor defense via purifying poisoned features
Recent studies have demonstrated the susceptibility of deep neural networks to backdoor
attacks. Given a backdoored model, its prediction of a poisoned sample with trigger will be …
Enhancing fine-tuning based backdoor defense with sharpness-aware minimization
Backdoor defense, which aims to detect or mitigate the effect of malicious triggers introduced
by attackers, is becoming increasingly critical for machine learning security and integrity …
Towards efficient adversarial training on vision transformers
Vision Transformer (ViT), as a powerful alternative to Convolutional Neural Network
(CNN), has received much attention. Recent work showed that ViTs are also vulnerable to …
Boosting the transferability of adversarial attacks with reverse adversarial perturbation
Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples,
which can produce erroneous predictions by injecting imperceptible perturbations. In this …
Prior-guided adversarial initialization for fast adversarial training
Fast adversarial training (FAT) effectively improves the efficiency of standard adversarial
training (SAT). However, initial FAT encounters catastrophic overfitting, i.e., the robust …
Triangle attack: A query-efficient decision-based adversarial attack
Decision-based attacks pose a severe threat to real-world applications since they regard the
target model as a black box and only access the hard prediction label. Great efforts have …
Revisiting backdoor attacks against large vision-language models
Instruction tuning enhances large vision-language models (LVLMs) but raises security risks
through potential backdoor attacks due to their openness. Previous backdoor studies focus …
Towards robust physical-world backdoor attacks on lane detection
Deep learning-based lane detection (LD) plays a critical role in autonomous driving
systems, such as adaptive cruise control. However, it is vulnerable to backdoor attacks …
A large-scale multiple-objective method for black-box attack against object detection
Recent studies have shown that detectors based on deep models are vulnerable to
adversarial examples, even in the black-box scenario where the attacker cannot access the …
Hide in thicket: Generating imperceptible and rational adversarial perturbations on 3d point clouds
Adversarial attack methods based on point manipulation for 3D point cloud classification
have revealed the fragility of 3D models, yet the adversarial examples they produce are …