LDAdam: Adaptive Optimization from Low-Dimensional Gradient Statistics

T Robert, M Safaryan, IV Modoranu… - arXiv preprint arXiv …, 2024 - arxiv.org
We introduce LDAdam, a memory-efficient optimizer for training large models that performs
adaptive optimization steps within lower-dimensional subspaces, while consistently …
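As a rough illustration of the subspace idea (not the authors' exact LDAdam algorithm, which among other things must handle changing subspaces), the sketch below takes one Adam-style step inside an r-dimensional subspace; the projection `P` and all hyperparameters are illustrative assumptions:

```python
import numpy as np

def low_dim_adam_step(W, G, P, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam-style step taken in an r-dimensional subspace.

    W: (n, d) parameters, G: (n, d) gradient,
    P: (d, r) orthonormal projection with r << d,
    m, v: (n, r) Adam moments stored only in the subspace.
    """
    g = G @ P                        # project the gradient: (n, r)
    m = b1 * m + (1 - b1) * g        # first moment, low-dimensional
    v = b2 * v + (1 - b2) * g ** 2   # second moment, low-dimensional
    m_hat = m / (1 - b1 ** t)        # standard Adam bias correction
    v_hat = v / (1 - b2 ** t)
    W = W - lr * ((m_hat / (np.sqrt(v_hat) + eps)) @ P.T)  # map step back to full space
    return W, m, v
```

The memory saving comes from `m` and `v` being (n, r) rather than (n, d); a practical version would also periodically refresh `P` (e.g., from an SVD of recent gradients) and correct for the projection error, which this sketch omits.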

Exact Aggregation for Federated and Efficient Fine-Tuning of Foundation Models

R Singhal, K Ponkshe, P Vepakomma - arXiv preprint arXiv:2410.09432, 2024 - arxiv.org
Low-Rank Adaptation (LoRA) is a popular technique for efficient fine-tuning of foundation
models. However, applying LoRA in federated learning environments, where data is …
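The federated complication the snippet alludes to is easy to see concretely: averaging the LoRA factors A and B across clients is not the same as averaging the low-rank updates B·A themselves. A minimal NumPy sketch of the mismatch (not the paper's aggregation scheme):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_clients = 8, 2, 3

# Each client i holds LoRA factors whose product B_i @ A_i is its update.
As = [rng.normal(size=(r, d)) for _ in range(n_clients)]
Bs = [rng.normal(size=(d, r)) for _ in range(n_clients)]

# Naive FedAvg: average A and B separately, then multiply.
naive = (sum(Bs) / n_clients) @ (sum(As) / n_clients)

# Exact aggregation: average the client products themselves.
exact = sum(B @ A for B, A in zip(Bs, As)) / n_clients

print(np.linalg.norm(naive - exact))  # nonzero: the two rules disagree
```

How the paper achieves exact aggregation while keeping communication low-rank is not stated in the snippet; the code only demonstrates why separate averaging is inexact.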

Sparse Gradient Compression for Fine-Tuning Large Language Models

DH Yang, MM Amiri, T Pedapati, S Chaudhury… - arXiv preprint arXiv …, 2025 - arxiv.org
Fine-tuning large language models (LLMs) for downstream tasks has become increasingly
crucial due to their widespread use and the growing availability of open-source models …
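The snippet is mostly motivation, but the title points at a standard mechanism: keep only the largest-magnitude gradient entries and carry the discarded remainder forward. A generic top-k sketch with error feedback (an illustrative compressor, not necessarily the paper's scheme):

```python
import numpy as np

def topk_compress(grad, k):
    """Keep the k largest-magnitude entries; return a sparse representation."""
    flat = grad.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def topk_decompress(idx, values, shape):
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = values
    return flat.reshape(shape)

# Error feedback: the mass dropped this step is added to the next gradient,
# so the compression error does not accumulate over training.
rng = np.random.default_rng(1)
residual = np.zeros((4, 4))
grad = rng.normal(size=(4, 4))
idx, vals = topk_compress(grad + residual, k=4)
sparse_grad = topk_decompress(idx, vals, grad.shape)
residual = (grad + residual) - sparse_grad
```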

RepLoRA: Reparameterizing Low-Rank Adaptation via the Perspective of Mixture of Experts

T Truong, C Nguyen, H Nguyen, M Le, T Le… - arXiv preprint arXiv …, 2025 - arxiv.org
Low-rank adaptation (LoRA) has emerged as a powerful method for fine-tuning large-scale
foundation models. Despite its popularity, the theoretical understanding of LoRA has …
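Since the entry studies LoRA itself, a reminder of the object being reparameterized may help: a frozen weight plus a trainable low-rank correction. This sketch is vanilla LoRA, not RepLoRA's mixture-of-experts reparameterization, which the snippet does not detail:

```python
import numpy as np

class LoRALinear:
    """Frozen weight W plus a trainable low-rank update (alpha / r) * B @ A."""
    def __init__(self, W, r=4, alpha=8, rng=None):
        rng = rng or np.random.default_rng(0)
        d_out, d_in = W.shape
        self.W = W                               # frozen pretrained weight
        self.A = rng.normal(0, 0.02, (r, d_in))  # trainable, small random init
        self.B = np.zeros((d_out, r))            # trainable, zero init => no-op at start
        self.scale = alpha / r

    def forward(self, x):
        return self.W @ x + self.scale * (self.B @ (self.A @ x))
```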

An Augmented Backward-Corrected Projector Splitting Integrator for Dynamical Low-Rank Training

J Kusch, S Schotthöfer, A Walter - arXiv preprint arXiv:2502.03006, 2025 - arxiv.org
Layer factorization has emerged as a widely used technique for training memory-efficient
neural networks. However, layer factorization methods face several challenges, particularly …
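Dynamical low-rank training keeps each layer in factored form W = U S Vᵀ and evolves the factors rather than the full matrix. As a rough illustration of one ingredient, the sketch below does a coefficient update with the bases held fixed; the paper's augmented, backward-corrected integrator also updates and corrects the bases, which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 64, 32, 4

# Factored layer W = U @ S @ V.T; only the three small factors are trained.
U, _ = np.linalg.qr(rng.normal(size=(n, r)))  # orthonormal basis, (n, r)
V, _ = np.linalg.qr(rng.normal(size=(m, r)))  # orthonormal basis, (m, r)
S = rng.normal(size=(r, r))                   # small coefficient matrix

G = rng.normal(size=(n, m))  # gradient w.r.t. full W (formed densely only for illustration)

# Coefficient (S) step: project the gradient onto the current bases.
lr = 1e-2
S = S - lr * (U.T @ G @ V)
```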

Low-Rank Agent-Specific Adaptation (LoRASA) for Multi-Agent Policy Learning

B Zhang, A Kapoor, M Sun - arXiv preprint arXiv:2502.05573, 2025 - arxiv.org
Multi-agent reinforcement learning (MARL) often relies on parameter sharing (PS) to
scale efficiently. However, purely shared policies can stifle each agent's unique …
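The title suggests the by-now-standard pattern of a shared backbone with per-agent low-rank deltas. A minimal sketch under that reading (names and shapes are hypothetical; the paper's actual parameterization may differ):

```python
import numpy as np

class SharedPolicyWithAdapters:
    """One shared linear policy layer plus a low-rank correction per agent."""
    def __init__(self, d_out, d_in, n_agents, r=2, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W = rng.normal(0, 0.1, (d_out, d_in))  # parameters shared by all agents
        self.A = [rng.normal(0, 0.02, (r, d_in)) for _ in range(n_agents)]
        self.B = [np.zeros((d_out, r)) for _ in range(n_agents)]  # zero init

    def act_logits(self, agent_id, obs):
        delta = self.B[agent_id] @ self.A[agent_id]  # agent-specific low-rank delta
        return (self.W + delta) @ obs
```

Zero-initializing B makes every agent start from the shared policy and specialize only as its adapter trains, preserving the scaling benefit of parameter sharing.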

LasQ: Largest Singular Components Fine-Tuning for LLMs with Quantization

X Zhao, B Lin, Y Song - … Conference on Natural Language Processing and …, 2024 - Springer
Large language models (LLMs) have demonstrated strong capabilities in various industries,
but as the model parameters increase, the computational cost of fine-tuning the entire model …
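The snippet is motivational, but the title describes the split: keep the largest singular components of a weight matrix trainable in full precision and freeze the quantized remainder. A generic sketch in that spirit, with a crude uniform quantizer standing in for whatever scheme the paper actually uses:

```python
import numpy as np

def split_largest_singular_components(W, r=8, n_bits=4):
    """Top-r singular components stay trainable; the residual is quantized and frozen."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    principal = (U[:, :r] * s[:r]) @ Vt[:r]         # largest singular components
    residual = W - principal
    scale = np.abs(residual).max() / (2 ** (n_bits - 1) - 1)
    q = np.round(residual / scale).astype(np.int8)  # frozen low-bit residual
    return (U[:, :r] * s[:r], Vt[:r]), (q, scale)   # (trainable factors), (frozen part)
```

At inference the layer reconstructs W as the product of the trainable factors plus `q * scale`, so only the two thin factors are ever updated during fine-tuning.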