A survey of adversarial attack and defense methods for malware classification in cyber security

S Yan, J Ren, W Wang, L Sun… - … Surveys & Tutorials, 2022 - ieeexplore.ieee.org
Malware poses a severe threat to cyber security. Attackers use malware to achieve their
malicious purposes, such as unauthorized access, stealing confidential data, blackmailing …

Robustbench: a standardized adversarial robustness benchmark

F Croce, M Andriushchenko, V Sehwag… - arXiv preprint arXiv …, 2020 - arxiv.org
As a research community, we still lack a systematic understanding of the progress on
adversarial robustness, which often makes it hard to identify the most promising ideas in …

A survey of attacks on large vision-language models: Resources, advances, and future trends

D Liu, M Yang, X Qu, P Zhou, Y Cheng… - arXiv preprint arXiv …, 2024 - arxiv.org
With the significant development of large models in recent years, Large Vision-Language
Models (LVLMs) have demonstrated remarkable capabilities across a wide range of …

Square attack: a query-efficient black-box adversarial attack via random search

M Andriushchenko, F Croce, N Flammarion… - European conference on …, 2020 - Springer
We propose the Square Attack, a score-based black-box l_2- and l_∞-adversarial
attack that does not rely on local gradient information and thus is not affected by …
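The snippet describes a score-based black-box attack driven by random search rather than gradients. The sketch below is only a rough illustration of that idea, not the authors' exact Square Attack procedure: it queries a scalar loss, proposes localized square-shaped l_∞ perturbations, and accepts a proposal only if the loss increases. The names score_fn, eps, p, and n_iters are illustrative assumptions, not the paper's API.

```python
import numpy as np

def random_search_linf_attack(score_fn, x, y, eps=0.05, p=0.05, n_iters=1000, rng=None):
    """Toy score-based black-box attack via random search (illustrative sketch only).

    score_fn(x_adv, y) -> scalar loss to MAXIMIZE (higher = closer to misclassification);
    x: image array of shape (H, W, C) with values in [0, 1];
    eps: l_inf budget; p: rough fraction of pixels perturbed per proposal.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w, c = x.shape
    # Initialize at a random corner of the l_inf ball around x (clipped to [0, 1]).
    x_adv = np.clip(x + eps * rng.choice([-1.0, 1.0], size=x.shape), 0.0, 1.0)
    best = score_fn(x_adv, y)
    side = min(h, w, max(1, int(round(np.sqrt(p * h * w)))))  # square window side length
    for _ in range(n_iters):
        # Propose a square-shaped, per-channel +/- eps perturbation at a random location.
        r = rng.integers(0, h - side + 1)
        col = rng.integers(0, w - side + 1)
        candidate = x_adv.copy()
        delta = eps * rng.choice([-1.0, 1.0], size=(1, 1, c))
        candidate[r:r + side, col:col + side, :] = np.clip(
            x[r:r + side, col:col + side, :] + delta, 0.0, 1.0)
        # Query the model's score; keep the proposal only if the loss improves.
        cand = score_fn(candidate, y)
        if cand > best:
            x_adv, best = candidate, cand
    return x_adv
```

The actual Square Attack additionally shrinks the square size according to a fixed schedule and uses a margin-based loss; the sketch keeps only the core accept-if-better random search that makes the method gradient-free.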

A fourier perspective on model robustness in computer vision

D Yin, R Gontijo Lopes, J Shlens… - Advances in Neural …, 2019 - proceedings.neurips.cc
Achieving robustness to distributional shift is a longstanding and challenging goal of
computer vision. Data augmentation is a commonly used approach for improving …

A survey on safety-critical driving scenario generation—A methodological perspective

W Ding, C Xu, M Arief, H Lin, B Li… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Autonomous driving systems have witnessed significant development over the past years
thanks to advances in machine learning-enabled sensing and decision-making …

Structure invariant transformation for better adversarial transferability

X Wang, Z Zhang, J Zhang - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
Given the severe vulnerability of Deep Neural Networks (DNNs) to adversarial
examples, there is an urgent need for an effective adversarial attack to identify the …

Explaining in style: Training a gan to explain a classifier in stylespace

O Lang, Y Gandelsman, M Yarom… - Proceedings of the …, 2021 - openaccess.thecvf.com
Image classification models can depend on multiple different semantic attributes of the
image. An explanation of the decision of the classifier needs to both discover and visualize …

Evading deepfake detectors via adversarial statistical consistency

Y Hou, Q Guo, Y Huang, X Xie… - Proceedings of the …, 2023 - openaccess.thecvf.com
In recent years, as various realistic face forgery techniques, collectively known as DeepFake, improve
by leaps and bounds, more and more DeepFake detection techniques have been proposed …

Improving the transferability of adversarial samples by path-augmented method

J Zhang, J Huang, W Wang, Y Li… - Proceedings of the …, 2023 - openaccess.thecvf.com
Deep neural networks have achieved unprecedented success on diverse vision tasks.
However, they are vulnerable to adversarial noise that is imperceptible to humans. This …