Robust fast adaptation from adversarially explicit task distribution generation
Meta-learning is a practical learning paradigm to transfer skills across tasks from a few
examples. Nevertheless, the existence of task distribution shifts tends to weaken meta …
Beyond model adaptation at test time: A survey
Machine learning algorithms have achieved remarkable success across various disciplines,
use cases and applications, under the prevailing assumption that training and test samples …
A study of test-time contrastive concepts for open-world, open-vocabulary semantic segmentation
Recent VLMs, pre-trained on large amounts of image-text pairs to align both modalities,
have opened the way to open-vocabulary semantic segmentation. Given an arbitrary set of …
IPO: Interpretable Prompt Optimization for Vision-Language Models
Pre-trained vision-language models like CLIP have remarkably adapted to various
downstream tasks. Nonetheless, their performance heavily depends on the specificity of the …
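The prompt dependence that IPO targets is easy to see in plain zero-shot CLIP classification, where the text prompt literally defines the classifier. A minimal sketch using the standard Hugging Face CLIP API (the model identifier is a common public checkpoint; the labels, template, and input image are illustrative assumptions, not taken from the paper):

```python
# Zero-shot CLIP classification: the prompt template IS the classifier,
# which is why optimizing its wording (as IPO does) changes accuracy.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["cat", "dog", "bird"]                      # illustrative label set
prompts = [f"a photo of a {c}" for c in labels]      # the template prompt methods tune

image = Image.open("example.jpg")                    # hypothetical input image
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(**inputs).logits_per_image        # (1, num_prompts) similarity scores
probs = logits.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```

Swapping "a photo of a {c}" for another phrasing shifts the probabilities, which is exactly the sensitivity prompt optimization exploits.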
Prompt Diffusion Robustifies Any-Modality Prompt Learning
Foundation models enable prompt-based classifiers for zero-shot and few-shot learning.
Nonetheless, the conventional method of employing fixed prompts suffers from distributional …
Advances in Multimodal Adaptation and Generalization: From Traditional Approaches to Foundation Models
In real-world scenarios, achieving domain adaptation and generalization poses significant
challenges, as models must adapt to or generalize across unknown target distributions …
DynaPrompt: Dynamic Test-Time Prompt Tuning
Test-time prompt tuning enhances zero-shot generalization of vision-language models but
tends to ignore the relatedness among test samples during inference. Online test-time …
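For a concrete picture of what test-time prompt tuning means here, a toy sketch of the generic recipe (tune a small prompt parameter per test sample by minimizing prediction entropy over augmented views). All shapes and the stand-in feature tensors below are assumptions made for self-containment; this is the general scheme behind this line of work, not DynaPrompt's actual procedure:

```python
# Test-time prompt tuning sketch: adapt only a learnable prompt residual
# on a single unlabeled test sample by minimizing prediction entropy.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
dim, num_classes = 64, 5

# Frozen per-class "text features", a stand-in for a CLIP text encoder output.
text_features = F.normalize(torch.randn(num_classes, dim), dim=-1)

# Learnable prompt residual added to the text features; only this is tuned.
prompt = torch.zeros(1, dim, requires_grad=True)
optimizer = torch.optim.AdamW([prompt], lr=5e-3)

# Features of several augmented views of one test image (stand-in encoder).
image_features = F.normalize(torch.randn(8, dim), dim=-1)

for _ in range(10):                                  # a few steps on this one sample
    class_feats = F.normalize(text_features + prompt, dim=-1)
    logits = 100.0 * image_features @ class_feats.t()
    probs = logits.softmax(dim=-1).mean(dim=0)       # average over augmented views
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum()
    optimizer.zero_grad()
    entropy.backward()                               # minimize prediction entropy
    optimizer.step()

with torch.no_grad():
    class_feats = F.normalize(text_features + prompt, dim=-1)
    pred = (image_features @ class_feats.t()).softmax(-1).mean(0).argmax()
print(pred.item())                                   # prediction after adaptation
```

The "online" variants this snippet alludes to differ mainly in whether the tuned prompt is reset per sample or carried across the test stream.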
Parameter-Efficient Fine-Tuning for Foundation Models
D Zhang, T Feng, L Xue, Y Wang, Y Dong… - arXiv preprint arXiv …, 2025 - arxiv.org
This survey delves into the realm of Parameter-Efficient Fine-Tuning (PEFT) within the
context of Foundation Models (FMs). PEFT, a cost-effective fine-tuning technique, minimizes …
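For a concrete sense of the cost savings the survey refers to, a minimal LoRA-style layer, the canonical PEFT method: the pretrained weight is frozen and only two low-rank factors are trained. Layer sizes and rank below are arbitrary illustrative choices, not from the survey:

```python
# LoRA sketch: freeze the base linear weight, train only rank-r factors A and B.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)        # freeze pretrained weight
        self.base.bias.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, rank))  # zero init: update starts at zero
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.lora_a.t() @ self.lora_b.t())

layer = LoRALinear(768, 768, rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total}")            # only the rank-8 factors update
```

With rank 8 on a 768x768 layer, the trainable parameters drop from ~590K to ~12K, which is the kind of reduction PEFT methods trade against full fine-tuning.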
Technical note on calibrating vision-language models under covariate shift
Despite being a successful example of emerging capability, vision-language foundation
models for low-shot vision classification have a limited ability to sufficiently generalize to the …
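For context on what "calibrating" means operationally, the standard post-hoc baseline in this space is temperature scaling, fitted on held-out data. This is the textbook method, not necessarily the note's own proposal, and the logits and labels below are toy stand-ins:

```python
# Temperature scaling sketch: fit a single scalar T on validation logits
# so that softmax(logits / T) gives better-calibrated confidences.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(256, 10) * 3.0          # toy validation logits (overconfident)
labels = torch.randint(0, 10, (256,))        # toy validation labels

log_t = torch.zeros(1, requires_grad=True)   # optimize log-temperature to keep T > 0
optimizer = torch.optim.LBFGS([log_t], lr=0.1, max_iter=50)

def closure():
    optimizer.zero_grad()
    loss = F.cross_entropy(logits / log_t.exp(), labels)  # NLL of scaled logits
    loss.backward()
    return loss

optimizer.step(closure)
print(f"fitted temperature: {log_t.exp().item():.2f}")
# At test time, divide logits by this temperature before the softmax.
```

Covariate shift is precisely where a single temperature fitted in-distribution stops being enough, which is the gap such a note addresses.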
Complementary Subspace Low-Rank Adaptation of Vision-Language Models for Few-Shot Classification
Z Wang, J Dai, K Li, X Li, Y Guo, M Xiang - arXiv preprint arXiv:2501.15040, 2025 - arxiv.org
Vision-language models (VLMs) are designed for large-scale image-text alignment as
pretrained foundation models. For downstream few-shot classification tasks, parameter …