Each Rank Could be an Expert: Single-Ranked Mixture of Experts LoRA for Multi-Task Learning
Low-Rank Adaptation (LoRA) is widely used for adapting large language models (LLMs) to
specific domains due to its efficiency and modularity. However, vanilla LoRA struggles …
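The low-rank adaptation idea behind this line of work can be sketched briefly. This is a minimal illustration of the general LoRA mechanism, not the paper's mixture-of-experts method; the function and variable names are hypothetical, and the frozen weight `W` stands in for a pretrained layer.

```python
import numpy as np

# Illustrative sketch of the LoRA idea: instead of updating a full weight
# matrix W (d_out x d_in), LoRA learns a low-rank update B @ A with rank
# r << min(d_out, d_in), so the adapted forward pass computes
# (W + B @ A) @ x with far fewer trainable parameters.

def lora_forward(W, A, B, x, alpha=1.0):
    """Apply a frozen weight W plus a scaled low-rank LoRA update B @ A."""
    return W @ x + alpha * (B @ (A @ x))

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 16, 2           # rank r is much smaller than d_out, d_in
W = rng.normal(size=(d_out, d_in))  # frozen pretrained weight
A = rng.normal(size=(r, d_in))      # trainable down-projection
B = np.zeros((d_out, r))            # trainable up-projection, zero-initialized
x = rng.normal(size=d_in)

y = lora_forward(W, A, B, x)
# With B initialized to zero, the adapted output equals the frozen output,
# which is the standard LoRA initialization.
```

Only `A` and `B` are trained, so the number of trainable parameters is `r * (d_in + d_out)` rather than `d_in * d_out`, which is what makes LoRA modules cheap to store and share.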
An Adaptive Aggregation Method for Federated Learning via Meta Controller
Federated learning (FL) has emerged as a novel machine learning setting that enables
collaborative training of deep models on decentralized clients under privacy constraints. In …
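The aggregation step this paper adapts can be illustrated with the standard FedAvg baseline, where the server combines client updates weighted by local dataset size. This sketch shows that baseline only, not the paper's meta-controller; the names are hypothetical.

```python
import numpy as np

# FedAvg-style server aggregation: each client i sends a parameter vector
# w_i trained on n_i local examples; the server returns the weighted mean
# sum_i (n_i / n_total) * w_i.

def fed_avg(client_weights, client_sizes):
    """Weighted average of client parameter vectors by dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()
    return sum(c * w for c, w in zip(coeffs, client_weights))

clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
global_update = fed_avg(clients, client_sizes=[10, 30])
# The second client holds 3x the data, so it contributes 3x the weight:
# 0.25 * [1, 2] + 0.75 * [3, 4] = [2.5, 3.5]
```

Adaptive aggregation methods replace the fixed size-proportional coefficients with learned ones; here the coefficients are static by construction.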
Attack on LLMs: LoRA Once, Backdoor Everywhere in the Share-and-Play Ecosystem
Finetuning large language models (LLMs) with LoRA has gained significant popularity due
to its simplicity and effectiveness. Oftentimes, users may even find pluggable community …