A survey on LoRA of large language models
Y Mao, Y Ge, Y Fan, W Xu, Y Mi, Z Hu… - Frontiers of Computer …, 2025 - Springer
Low-Rank Adaptation (LoRA), which updates the dense neural network layers with
pluggable low-rank matrices, is one of the best-performing parameter-efficient fine-tuning …
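The snippet above describes LoRA as attaching pluggable low-rank matrices to frozen dense layers. As a rough illustration of that idea only (not code from the survey; layer shapes, rank r, and the alpha scaling are assumptions), a LoRA-augmented linear layer can be sketched as:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen dense layer plus a pluggable low-rank update: W x + (alpha / r) * B A x."""

    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)            # pre-trained weight stays frozen
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))  # zero init: no change at start
        self.scaling = alpha / r

    def forward(self, x):
        # Only lora_A and lora_B receive gradients, which is what makes the
        # adapter "pluggable" and parameter-efficient.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling
```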
FederatedScope-LLM: A comprehensive package for fine-tuning large language models in federated learning
Large language models (LLMs) have demonstrated great capabilities in various natural
language understanding and generation tasks. These pre-trained LLMs can be further …
Heterogeneous LoRA for federated fine-tuning of on-device foundation models
Foundation models (FMs) adapt well to specific domains or tasks with fine-tuning, and
federated learning (FL) enables the potential for privacy-preserving fine-tuning of the FMs …
Data-juicer: A one-stop data processing system for large language models
The immense evolution in Large Language Models (LLMs) has underscored the importance
of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from …
Federated full-parameter tuning of billion-sized language models with communication cost under 18 kilobytes
Pre-trained large language models (LLMs) need fine-tuning to improve their responsiveness
to natural language instructions. Federated learning offers a way to fine-tune LLMs using the …
Knowledge-Empowered, Collaborative, and Co-Evolving AI Models: The Post-LLM Roadmap
Large language models (LLMs) have significantly advanced artificial intelligence (AI) by
excelling in tasks such as understanding, generation, and reasoning across multiple …
On the convergence of zeroth-order federated tuning for large language models
The confluence of Federated Learning (FL) and Large Language Models (LLMs) is ushering
in a new era in privacy-preserving natural language processing. However, the intensive …
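For readers unfamiliar with the term in this title, "zeroth-order" tuning estimates gradients from loss values alone, using only forward passes. The sketch below shows the generic two-point estimator; it is the textbook form, not necessarily the estimator analyzed in this paper, and the perturbation scale mu is an assumption.

```python
import torch

def zeroth_order_grad(loss_fn, params, mu=1e-3):
    """Two-point zeroth-order estimate: g ~ (L(p + mu*u) - L(p - mu*u)) / (2*mu) * u."""
    u = torch.randn_like(params)              # random perturbation direction
    loss_plus = loss_fn(params + mu * u)      # forward pass only, no backpropagation
    loss_minus = loss_fn(params - mu * u)
    return (loss_plus - loss_minus) / (2 * mu) * u
```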
Federated LoRA with sparse communication
Low-rank adaptation (LoRA) is a natural method for finetuning in communication-
constrained machine learning settings such as cross-device federated learning. Prior work …
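The snippet notes that LoRA suits communication-constrained federated settings because clients only need to exchange the small adapter matrices. Below is a minimal sketch of that communication pattern, a plain weighted average over LoRA parameters rather than this paper's sparse-communication scheme; the client weighting and state-dict keys are assumptions.

```python
import torch

def aggregate_lora_adapters(client_adapters, client_weights):
    """Weighted average of per-client LoRA adapter state dicts.

    client_adapters: list of dicts holding only the lora_A / lora_B tensors,
    so the server never receives the full model weights.
    """
    total = sum(client_weights)
    keys = client_adapters[0].keys()
    return {
        k: sum(w * adapter[k] for adapter, w in zip(client_adapters, client_weights)) / total
        for k in keys
    }

# Each client uploads roughly 2 * r * d adapter parameters per layer instead of d * d.
```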
FDLoRA: personalized federated learning of large language model via dual LoRA tuning
Large language models (LLMs) have emerged as important components across various
fields, yet their training requires substantial computation resources and abundant labeled …
The Synergy between Data and Multi-Modal Large Language Models: A Survey from Co-Development Perspective
The rapid development of large language models (LLMs) has been witnessed in recent
years. Based on the powerful LLMs, multi-modal LLMs (MLLMs) extend the modality from …