A comprehensive survey on test-time adaptation under distribution shifts
Machine learning methods strive to acquire a robust model during the training
process that can effectively generalize to test samples, even in the presence of distribution …
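For readers new to the area, the core recipe many surveyed methods share can be sketched in a few lines. The snippet below is a minimal, illustrative sketch of entropy-minimization-style adaptation (in the spirit of Tent, one family such surveys cover), not any single surveyed method; `model` and the test batch `x` are assumed to be supplied.

```python
import torch
import torch.nn as nn

def collect_norm_params(model: nn.Module):
    """Gather only the affine parameters of normalization layers for adaptation."""
    params = []
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm2d, nn.LayerNorm)):
            for p in (m.weight, m.bias):
                if p is not None:
                    params.append(p)
    return params

def entropy(logits: torch.Tensor) -> torch.Tensor:
    probs = logits.softmax(dim=1)
    return -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()

@torch.enable_grad()
def adapt_batch(model, x, lr=1e-3):
    """One test-time adaptation step: minimize prediction entropy on a test batch."""
    model.train()  # lets BatchNorm use the test batch's statistics
    opt = torch.optim.SGD(collect_norm_params(model), lr=lr, momentum=0.9)
    loss = entropy(model(x))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return model(x).argmax(dim=1)
```

Updating only the normalization parameters keeps the adaptation cheap and limits how far the model can drift from its source training.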
Memo: Test time robustness via adaptation and augmentation
While deep neural networks can attain good accuracy on in-distribution test points, many
applications require robustness even in the face of unexpected perturbations in the input …
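MEMO's procedure is compact enough to sketch directly: average the model's predictions over several augmented views of a single test input, minimize the entropy of that marginal distribution, then predict. In the sketch below, `augment` is a placeholder for whatever augmentation policy is used, and the learning rate and view count are illustrative.

```python
import copy
import math
import torch

def marginal_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Entropy of the prediction distribution averaged over augmented views."""
    log_probs = logits.log_softmax(dim=1)                       # (n_aug, n_classes)
    avg_log_probs = log_probs.logsumexp(dim=0) - math.log(logits.shape[0])
    return -(avg_log_probs.exp() * avg_log_probs).sum()

def memo_predict(model, x, augment, n_aug=32, lr=1e-4):
    """Adapt a copy of the model on a single test point, then predict."""
    model = copy.deepcopy(model)        # episodic: the adapted copy is discarded
    model.train()
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    views = torch.stack([augment(x) for _ in range(n_aug)])    # (n_aug, C, H, W)
    loss = marginal_entropy(model(views))
    opt.zero_grad(); loss.backward(); opt.step()
    model.eval()
    with torch.no_grad():
        return model(x.unsqueeze(0)).argmax(dim=1)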
Adashield: Safeguarding multimodal large language models from structure-based attack via adaptive shield prompting
With the advent and widespread deployment of Multimodal Large Language Models
(MLLMs), the imperative to ensure their safety has become increasingly pronounced …
How deep learning sees the world: A survey on adversarial attacks & defenses
Deep Learning is currently used to perform multiple tasks, such as object recognition, face
recognition, and natural language processing. However, Deep Neural Networks (DNNs) are …
DISCO: Adversarial defense with local implicit functions
The problem of adversarial defenses for image classification, where the goal is to robustify a
classifier against adversarial examples, is considered. Inspired by the hypothesis that these …
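A rough sketch of the local-implicit-function idea follows: an encoder extracts per-pixel features from the (possibly attacked) image, and a small MLP conditioned on pixel coordinates predicts a purified RGB value. This shows only the general shape of such a defense, which would be trained offline on attacked/clean image pairs; it is not DISCO's exact architecture.

```python
import torch
import torch.nn as nn

class LocalImplicitPurifier(nn.Module):
    """Defense sketch: predict a clean RGB value at each pixel coordinate
    from local features of a possibly adversarial input image."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        self.implicit_mlp = nn.Sequential(
            nn.Linear(feat_dim + 2, 256), nn.ReLU(),
            nn.Linear(256, 3),
        )

    def forward(self, x):
        b, _, h, w = x.shape
        feats = self.encoder(x)                                # (B, F, H, W)
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=x.device),
            torch.linspace(-1, 1, w, device=x.device),
            indexing="ij")
        coords = torch.stack([xs, ys], dim=-1).expand(b, h, w, 2)
        inp = torch.cat([feats.permute(0, 2, 3, 1), coords], dim=-1)
        rgb = self.implicit_mlp(inp)                           # (B, H, W, 3)
        return rgb.permute(0, 3, 1, 2)
```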
Visual prompting for adversarial robustness
In this work, we leverage visual prompting (VP) to improve adversarial robustness of a fixed,
pre-trained model at test time. Compared to conventional adversarial defenses, VP allows …
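The mechanic can be sketched as follows: a universal additive prompt is optimized against PGD attacks while the classifier's weights stay frozen. The image size, attack budget, and loss below are illustrative assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, prompt, eps=8/255, step=2/255, iters=10):
    """Standard PGD on the prompted input; model and prompt are fixed here."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(iters):
        loss = F.cross_entropy(model((x + delta + prompt).clamp(0, 1)), y)
        g, = torch.autograd.grad(loss, delta)
        delta = (delta + step * g.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return delta.detach()

def train_prompt(model, loader, epochs=5, lr=0.1, device="cpu"):
    """Learn a universal additive visual prompt; classifier weights stay frozen."""
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)
    prompt = torch.zeros(3, 32, 32, device=device, requires_grad=True)  # CIFAR-sized, an assumption
    opt = torch.optim.SGD([prompt], lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            delta = pgd_attack(model, x, y, prompt.detach())
            loss = F.cross_entropy(model((x + delta + prompt).clamp(0, 1)), y)
            opt.zero_grad(); loss.backward(); opt.step()
    return prompt.detach()
```

Because only the prompt is trained, the defense composes with any fixed pre-trained classifier at essentially zero deployment cost.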
Convolutional visual prompt for robust visual perception
Vision models are often vulnerable to out-of-distribution (OOD) samples without adaptation.
While visual prompts offer a lightweight method of input-space adaptation for large-scale …
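A sketch of the input-space mechanic: a small, identity-initialized convolutional kernel is applied to the test batch and tuned for a few steps with a self-supervised signal (plain prediction entropy here as a stand-in; the paper's self-supervised objective differs).

```python
import torch
import torch.nn.functional as F

def adapt_conv_prompt(model, x, steps=5, lr=1e-2, k=3):
    """Tune a small depthwise conv kernel on the test batch itself; model stays frozen."""
    model.eval()
    # Identity-initialized k x k kernel per channel, so the prompt starts as a no-op.
    kernel = torch.zeros(3, 1, k, k, device=x.device)
    kernel[:, 0, k // 2, k // 2] = 1.0
    kernel.requires_grad_(True)
    opt = torch.optim.Adam([kernel], lr=lr)
    for _ in range(steps):
        prompted = F.conv2d(x, kernel, padding=k // 2, groups=3)
        probs = model(prompted).softmax(dim=1)
        loss = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()  # entropy
        opt.zero_grad(); loss.backward(); opt.step()
    return F.conv2d(x, kernel.detach(), padding=k // 2, groups=3)
```

With only a handful of kernel weights to fit, the adaptation is far lighter than updating the model, which is the appeal of input-space prompts.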
Defenses in adversarial machine learning: A survey
Adversarial phenomena have been widely observed in machine learning (ML) systems,
especially those using deep neural networks: such systems may produce …
Taxonomy-structured domain adaptation
Domain adaptation aims to mitigate distribution shifts among different domains.
However, traditional formulations are mostly limited to categorical domains, greatly …
GDA: Generalized Diffusion for Robust Test-time Adaptation
Machine learning models face generalization challenges when exposed to out-of-
distribution (OOD) samples with unforeseen distribution shifts. Recent research reveals that …
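Setting GDA's guidance terms aside, the underlying purification loop diffuses the test input partway and denoises it back toward the training distribution. In the sketch below, `denoiser` (a hypothetical eps-prediction network) and its `alphas_cumprod` schedule are assumed inputs, and the reverse pass is plain unguided DDIM rather than GDA's guided version.

```python
import torch

@torch.no_grad()
def diffusion_purify(denoiser, x, alphas_cumprod, t_star=100):
    """Project a shifted test input back toward the training distribution.

    `denoiser(x_t, t)` is an assumed pretrained noise-prediction network
    (eps-prediction convention); `alphas_cumprod` is its noise schedule.
    """
    a = alphas_cumprod
    # Forward: diffuse the input up to timestep t_star.
    noise = torch.randn_like(x)
    x_t = a[t_star].sqrt() * x + (1 - a[t_star]).sqrt() * noise
    # Reverse: deterministic DDIM-style denoising back to t = 0.
    for t in range(t_star, 0, -1):
        eps = denoiser(x_t, torch.tensor([t], device=x.device))
        x0_hat = (x_t - (1 - a[t]).sqrt() * eps) / a[t].sqrt()
        x_t = a[t - 1].sqrt() * x0_hat + (1 - a[t - 1]).sqrt() * eps
    return x_t
```

The purified output is then fed to the unmodified classifier, so the defense adapts the input rather than the model's weights.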