Parameter-efficient fine-tuning for pre-trained vision models: A survey

Y Xin, S Luo, H Zhou, J Du, X Liu, Y Fan, Q Li… - arXiv preprint arXiv …, 2024 - arxiv.org
Large-scale pre-trained vision models (PVMs) have shown great potential for adaptability
across various downstream vision tasks. However, with state-of-the-art PVMs growing to …

Convolutional Prompting meets Language Models for Continual Learning

A Roy, R Moulick, VK Verma… - Proceedings of the …, 2024 - openaccess.thecvf.com
Continual Learning (CL) enables machine learning models to learn from continuously
shifting new training data in the absence of data from old tasks. Recently, pre-trained vision …

Beyond model adaptation at test time: A survey

Z Xiao, CGM Snoek - arXiv preprint arXiv:2411.03687, 2024 - arxiv.org
Machine learning algorithms have achieved remarkable success across various disciplines,
use cases and applications, under the prevailing assumption that training and test samples …

Proactive schemes: A survey of adversarial attacks for social good

V Asnani, X Yin, X Liu - arXiv preprint arXiv:2409.16491, 2024 - arxiv.org
Adversarial attacks in computer vision exploit the vulnerabilities of machine learning models
by introducing subtle perturbations to input data, often leading to incorrect predictions or …

Test-time adaptation meets image enhancement: Improving accuracy via uncertainty-aware logit switching

S Enomoto, N Hasegawa, K Adachi… - … Joint Conference on …, 2024 - ieeexplore.ieee.org
Deep neural networks have achieved remarkable success in a variety of computer vision
applications. However, accuracy degrades when the data distribution …

Differentiable Prompt Learning for Vision Language Models

Z Huang, T Pedapati, PY Chen, J Gao - arXiv preprint arXiv:2501.00457, 2024 - arxiv.org
Prompt learning is an effective way to exploit the potential of large-scale pre-trained
foundational models. Continuous prompts parameterize context tokens in prompts by turning …

Parameter-Efficient Fine-Tuning for Foundation Models

D Zhang, T Feng, L Xue, Y Wang, Y Dong… - arXiv preprint arXiv …, 2025 - arxiv.org
This survey delves into the realm of Parameter-Efficient Fine-Tuning (PEFT) within the
context of Foundation Models (FMs). PEFT, a cost-effective fine-tuning technique, minimizes …

High-Resolution Be Aware! Improving the Self-Supervised Real-World Super-Resolution

Y Zhang, A Yao - arXiv preprint arXiv:2411.16175, 2024 - arxiv.org
Self-supervised learning is crucial for super-resolution because ground-truth images are
usually unavailable in real-world settings. Existing methods derive self-supervision from low …

Convolutional Prompting for Broad-Domain Retinal Vessel Segmentation

Q Wei, W Yu, X Li - arXiv preprint arXiv:2412.18089, 2024 - arxiv.org
Previous research on retinal vessel segmentation is targeted at a specific image domain,
mostly color fundus photography (CFP). In this paper, we make a brave attempt to attack a …

Prompt Distribution Matters: Tuning Visual Prompt Through Semantic Metric Guidance

L Ren, C Chen, L Wang, KA Hua - openreview.net
Visual Prompt Tuning (VPT) has become a promising solution for Parameter-Efficient
Fine-Tuning (PEFT) of pre-trained Vision Transformer (ViT) models on downstream vision tasks …