CDTA: a cross-domain transfer-based attack with contrastive learning

Z Li, W Wu, Y Su, Z Zheng, MR Lyu - … of the AAAI Conference on Artificial …, 2023 - ojs.aaai.org
Despite their excellent performance, deep neural networks (DNNs) have been shown to be
vulnerable to adversarial examples. Besides, these examples are often transferable among …

Adversarial attacks on foundational vision models

N Inkawhich, G McDonald, R Luley - arXiv preprint - arxiv.org
… large, pretrained, task-agnostic foundational
vision models such as CLIP, ALIGN, DINOv2, etc. In fact, we are approaching the point …

Improving out-of-distribution detection by learning from the deployment environment

N Inkawhich, J Zhang, EK Davis… - IEEE Journal of …, 2022 - ieeexplore.ieee.org
Recognition systems in the remote sensing domain often operate in “open-world”
environments, where they must be capable of accurately classifying data from the in …

SoK: Pitfalls in evaluating black-box attacks

F Suya, A Suri, T Zhang, J Hong… - … IEEE Conference on …, 2024 - ieeexplore.ieee.org
Numerous works study black-box attacks on image classifiers, where adversaries generate
adversarial examples against unknown target models without having access to their internal …

PFFAA: Prototype-based Feature and Frequency Alteration Attack for Semantic Segmentation

Z Yu, Z Shi, X Liu, W Yang - Proceedings of the 32nd ACM International …, 2024 - dl.acm.org
Recent research has confirmed the possibility of adversarial attacks on deep models.
However, these methods typically assume that the surrogate model has access to the target …

Adversarial Attack across Datasets

Y Qin, Y **ong, J Yi, L Cao, CJ Hsieh - arxiv preprint arxiv:2110.07718, 2021 - arxiv.org
Existing transfer attack methods commonly assume that the attacker knows the training set
(eg, the label set, the input size) of the black-box victim models, which is usually unrealistic …