Parameter-efficient fine-tuning for pre-trained vision models: A survey
Large-scale pre-trained vision models (PVMs) have shown great potential for adaptability
across various downstream vision tasks. However, with state-of-the-art PVMs growing to …
V-PETL Bench: A Unified Visual Parameter-Efficient Transfer Learning Benchmark
Parameter-efficient transfer learning (PETL) methods show promise in adapting a pre-
trained model to various downstream tasks while training only a few parameters. In the …
Self-supervised visual preference alignment
This paper makes the first attempt towards unsupervised preference alignment in Vision-
Language Models (VLMs). We generate chosen and rejected responses with regard to the …
Parameter-efficient fine-tuning in large models: A survey of methodologies
L Wang, S Chen, L Jiang, S Pan, R Cai, S Yang… - arXiv preprint arXiv …, 2024 - arxiv.org
The large models, as predicted by scaling law forecasts, have made groundbreaking
progress in many fields, particularly in natural language generation tasks, where they have …
Efficient Few-Shot Action Recognition via Multi-level Post-reasoning
The integration with CLIP (Contrastive Language-Image Pre-training) has significantly
refreshed the accuracy leaderboard of FSAR (Few-Shot Action Recognition). However, the …
KARST: Multi-Kernel Kronecker Adaptation with Re-Scaling Transmission for Visual Classification
Fine-tuning pre-trained vision models for specific tasks is a common practice in computer
vision. However, this process becomes more expensive as models grow larger. Recently …
Time Series Foundation Model for Improved Transformer Load Forecasting and Overload Detection
Y Hou, C Ma, X Li, Y Sun, H Yu, Z Fang - Energies, 2025 - mdpi.com
Simple load forecasting and overload prediction models, such as LSTM and XGBoost, are
unable to handle the increasing amount of data in power systems. Recently, various …
MIST: Multi-Modal Interactive Side-Tuning for Efficient Referring Expression Comprehension
Referring expression comprehension (REC) is a vision-language task to locate a target
object in an image based on a language expression. Fully fine-tuning general-purpose pre …
Parameter-Efficient Fine-Tuning for Foundation Models
D Zhang, T Feng, L Xue, Y Wang, Y Dong… - arXiv preprint arXiv …, 2025 - arxiv.org
This survey delves into the realm of Parameter-Efficient Fine-Tuning (PEFT) within the
context of Foundation Models (FMs). PEFT, a cost-effective fine-tuning technique, minimizes …
Token Adaptation via Side Graph Convolution for Temporally and Spatially Efficient Fine-tuning of 3D Point Cloud Transformers
T Furuya - arXiv preprint arXiv:2502.14142, 2025 - arxiv.org
Parameter-efficient fine-tuning (PEFT) of pre-trained 3D point cloud Transformers has
emerged as a promising technique for 3D point cloud analysis. While existing PEFT …