| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| BackdoorBench: A comprehensive benchmark of backdoor learning | B Wu, H Chen, M Zhang, Z Zhu, S Wei, D Yuan, C Shen | Advances in Neural Information Processing Systems 35, 10546-10559 | 135 | 2022 |
| Shared adversarial unlearning: Backdoor mitigation by unlearning shared adversarial examples | S Wei, M Zhang, H Zha, B Wu | Advances in Neural Information Processing Systems 36, 25876-25909 | 29 | 2023 |
| Defenses in adversarial machine learning: A survey | B Wu, S Wei, M Zhu, M Zheng, Z Zhu, M Zhang, H Chen, D Yuan, L Liu, ... | arXiv preprint arXiv:2312.08890 | 15 | 2023 |
| VDC: Versatile data cleanser for detecting dirty samples via visual-linguistic inconsistency | Z Zhu, M Zhang, S Wei, B Wu, B Wu | arXiv preprint arXiv:2309.16211 | 5 | 2023 |
| Boosting backdoor attack with a learnable poisoning sample selection strategy | Z Zhu, M Zhang, S Wei, L Shen, Y Fan, B Wu | arXiv preprint arXiv:2307.07328 | 5 | 2023 |
| BackdoorBench: A Comprehensive Benchmark and Analysis of Backdoor Learning | B Wu, H Chen, M Zhang, Z Zhu, S Wei, D Yuan, M Zhu, R Wang, L Liu, ... | arXiv preprint arXiv:2401.15002 | 4 | 2024 |
| VDC: Versatile Data Cleanser based on Visual-Linguistic Inconsistency by Multimodal Large Language Models | Z Zhu, M Zhang, S Wei, B Wu, B Wu | The Twelfth International Conference on Learning Representations | 4 | 2024 |
| Activation Gradient based Poisoned Sample Detection Against Backdoor Attacks | D Yuan, S Wei, M Zhang, L Liu, B Wu | arXiv preprint arXiv:2312.06230 | 3 | 2023 |
| Reliable Poisoned Sample Detection against Backdoor Attacks Enhanced by Sharpness Aware Minimization | M Zhang, M Zhu, Z Zhu, B Wu | arXiv preprint arXiv:2411.11525 | | 2024 |
| Effective Frequency-based Backdoor Attacks with Low Poisoning Ratios | D Yuan, M Zhang, S Wei, S Yang, B Wu | | | |