Parameter-efficient fine-tuning for pre-trained vision models: A survey
Large-scale pre-trained vision models (PVMs) have shown great potential for adaptability
across various downstream vision tasks. However, with state-of-the-art PVMs growing to …
Convolutional Prompting meets Language Models for Continual Learning
Continual Learning (CL) enables machine learning models to learn from continuously
shifting new training data in the absence of data from old tasks. Recently, pre-trained vision …
Beyond model adaptation at test time: A survey
Machine learning algorithms have achieved remarkable success across various disciplines,
use cases and applications, under the prevailing assumption that training and test samples …
Proactive schemes: A survey of adversarial attacks for social good
Adversarial attacks in computer vision exploit the vulnerabilities of machine learning models
by introducing subtle perturbations to input data, often leading to incorrect predictions or …
Test-time adaptation meets image enhancement: Improving accuracy via uncertainty-aware logit switching
Deep neural networks have achieved remarkable success in a variety of computer vision
applications. However, accuracy degrades when the data distribution …
Differentiable Prompt Learning for Vision Language Models
Prompt learning is an effective way to exploit the potential of large-scale pre-trained
foundational models. Continuous prompts parameterize context tokens in prompts by turning …
Parameter-Efficient Fine-Tuning for Foundation Models
D Zhang, T Feng, L Xue, Y Wang, Y Dong… - arxiv preprint arxiv …, 2025 - arxiv.org
This survey delves into the realm of Parameter-Efficient Fine-Tuning (PEFT) within the
context of Foundation Models (FMs). PEFT, a cost-effective fine-tuning technique, minimizes …
High-Resolution Be Aware! Improving the Self-Supervised Real-World Super-Resolution
Self-supervised learning is crucial for super-resolution because ground-truth images are
usually unavailable for real-world settings. Existing methods derive self-supervision from low …
Convolutional Prompting for Broad-Domain Retinal Vessel Segmentation
Previous research on retinal vessel segmentation is targeted at a specific image domain,
mostly color fundus photography (CFP). In this paper we make a brave attempt to attack a …
Prompt Distribution Matters: Tuning Visual Prompt Through Semantic Metric Guidance
L Ren, C Chen, L Wang, KA Hua - openreview.net
Visual Prompt Tuning (VPT) has become a promising solution for Parameter-Efficient Fine-
Tuning (PEFT) of pre-trained Vision Transformer (ViT) models on downstream vision tasks …