Parameter-efficient fine-tuning for large models: A comprehensive survey
Large models represent a groundbreaking advancement in multiple application fields,
enabling remarkable achievements across various tasks. However, their unprecedented …
Parameter-efficient fine-tuning methods for pretrained language models: A critical review and assessment
With the continuous growth in the number of parameters of transformer-based pretrained
language models (PLMs), particularly the emergence of large language models (LLMs) with …
Dynamic adapter meets prompt tuning: Parameter-efficient transfer learning for point cloud analysis
Point cloud analysis has achieved outstanding performance by transferring point cloud pre-trained models. However, existing methods for model adaptation usually update all model …
Owl: A large language model for IT operations
With the rapid development of IT operations, it has become increasingly crucial to efficiently
manage and analyze large volumes of data for practical applications. The techniques of …
Continual prompt tuning for dialog state tracking
A desirable dialog system should be able to continually learn new skills without forgetting
old ones, and thereby adapt to new domains or tasks in its life cycle. However, continually …
Exploring efficient-tuning methods in self-supervised speech models
In this study, we aim to explore efficient tuning methods for speech self-supervised learning.
Recent studies show that self-supervised learning (SSL) can learn powerful representations …
Survey of different large language model architectures: Trends, benchmarks, and challenges
Large Language Models (LLMs) represent a class of deep learning models adept at
understanding natural language and generating coherent responses to various prompts or …
Group-aware parameter-efficient updating for content-adaptive neural video compression
Content-adaptive compression is crucial for enhancing the adaptability of the pre-trained
neural codec for various contents. However, its application in neural video compression …
DoRA: Enhancing parameter-efficient fine-tuning with dynamic rank distribution
Fine-tuning large-scale pre-trained models is inherently a resource-intensive task. While it
can enhance the capabilities of the model, it also incurs substantial computational costs …
Causes and cures for interference in multilingual translation
Multilingual machine translation models can benefit from synergy between different
language pairs, but also suffer from interference. While there is a growing number of …