A comprehensive survey on test-time adaptation under distribution shifts

J Liang, R He, T Tan - International Journal of Computer Vision, 2025 - Springer
Machine learning methods strive to acquire a robust model during the training
process that can effectively generalize to test samples, even in the presence of distribution …

MEMO: Test time robustness via adaptation and augmentation

M Zhang, S Levine, C Finn - Advances in neural information …, 2022 - proceedings.neurips.cc
While deep neural networks can attain good accuracy on in-distribution test points, many
applications require robustness even in the face of unexpected perturbations in the input …

AdaShield: Safeguarding multimodal large language models from structure-based attack via adaptive shield prompting

Y Wang, X Liu, Y Li, M Chen, C Xiao - European Conference on Computer …, 2024 - Springer
With the advent and widespread deployment of Multimodal Large Language Models
(MLLMs), the imperative to ensure their safety has become increasingly pronounced …

How deep learning sees the world: A survey on adversarial attacks & defenses

JC Costa, T Roxo, H Proença, PRM Inácio - IEEE Access, 2024 - ieeexplore.ieee.org
Deep Learning is currently used to perform multiple tasks, such as object recognition, face
recognition, and natural language processing. However, Deep Neural Networks (DNNs) are …

DISCO: Adversarial defense with local implicit functions

CH Ho, N Vasconcelos - Advances in Neural Information …, 2022 - proceedings.neurips.cc
The problem of adversarial defenses for image classification, where the goal is to robustify a
classifier against adversarial examples, is considered. Inspired by the hypothesis that these …

Visual prompting for adversarial robustness

A Chen, P Lorenz, Y Yao, PY Chen… - ICASSP 2023 …, 2023 - ieeexplore.ieee.org
In this work, we leverage visual prompting (VP) to improve adversarial robustness of a fixed,
pre-trained model at test time. Compared to conventional adversarial defenses, VP allows …

Convolutional visual prompt for robust visual perception

YY Tsai, C Mao, J Yang - Advances in Neural Information …, 2023 - proceedings.neurips.cc
Vision models are often vulnerable to out-of-distribution (OOD) samples without adapting.
While visual prompts offer a lightweight method of input-space adaptation for large-scale …

Defenses in adversarial machine learning: A survey

B Wu, S Wei, M Zhu, M Zheng, Z Zhu, M Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
The adversarial phenomenon has been widely observed in machine learning (ML) systems,
especially those using deep neural networks, meaning that ML systems may produce …

Taxonomy-structured domain adaptation

T Liu, Z Xu, H He, GY Hao, GH Lee… - … on Machine Learning, 2023 - proceedings.mlr.press
Domain adaptation aims to mitigate distribution shifts among different domains.
However, traditional formulations are mostly limited to categorical domains, greatly …

GDA: Generalized Diffusion for Robust Test-time Adaptation

YY Tsai, FC Chen, AYC Chen, J Yang… - Proceedings of the …, 2024 - openaccess.thecvf.com
Machine learning models face generalization challenges when exposed to out-of-
distribution (OOD) samples with unforeseen distribution shifts. Recent research reveals that …