Each Rank Could be an Expert: Single-Ranked Mixture of Experts LoRA for Multi-Task Learning

Z Zhao, Y Zhou, D Zhu, T Shen, X Wang, J Su… - arxiv preprint arxiv …, 2025 - arxiv.org
Low-Rank Adaptation (LoRA) is widely used for adapting large language models (LLMs) to
specific domains due to its efficiency and modularity. However, vanilla LoRA struggles …

An Adaptive Aggregation Method for Federated Learning via Meta Controller

T Shen, Z Li, Z Zhao, D Zhu, Z Lv, S Zhang… - Proceedings of the 6th …, 2024 - dl.acm.org
Federated learning (FL) has emerged as a novel machine learning setting that enables
collaborative training of deep models on decentralized clients under privacy constraints. In …

Attack on LLMs: LoRA Once, Backdoor Everywhere in the Share-and-Play Ecosystem

H Liu, S Zhong, X Sun, M Tian, Z Liu, R Tang, J Yuan… - openreview.net
Fine-tuning large language models (LLMs) with LoRA has gained significant popularity due
to its simplicity and effectiveness. Oftentimes, users may even find pluggable community …