A survey on LoRA of large language models

Y Mao, Y Ge, Y Fan, W Xu, Y Mi, Z Hu… - Frontiers of Computer …, 2025 - Springer
Abstract Low-Rank Adaptation (LoRA), which updates dense neural network layers with
pluggable low-rank matrices, is one of the best-performing parameter-efficient fine-tuning …
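
The "pluggable low-rank matrices" mechanism the snippet describes can be sketched in a few lines. This is a generic illustration of LoRA's forward pass under the usual conventions (frozen weight W, trainable factors A and B, scaling alpha/r); variable names and scaling are assumptions, not the survey's notation.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16):
    """Linear layer with a pluggable LoRA update.

    W is the frozen pretrained weight (out x in); A (r x in) and B (out x r)
    are the trainable low-rank matrices, so the effective weight is
    W + (alpha / r) * B @ A. A generic sketch of common LoRA usage.
    """
    r = A.shape[0]
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

# Toy usage: a rank-4 adapter on a 32 -> 64 layer.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 32))          # batch of 8 inputs
W = rng.normal(size=(64, 32))         # frozen dense weight
A = rng.normal(size=(4, 32)) * 0.01   # down-projection, small random init
B = np.zeros((64, 4))                 # up-projection, initialized to zero
y = lora_forward(x, W, A, B)
# With B = 0 the adapter is a no-op, so y equals the frozen layer's output.
assert np.allclose(y, x @ W.T)
```

Initializing B to zero is the standard choice so that fine-tuning starts from the pretrained model's behavior.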

LDAdam: Adaptive optimization from low-dimensional gradient statistics

T Robert, M Safaryan, IV Modoranu… - arXiv preprint arXiv …, 2024 - arxiv.org
We introduce LDAdam, a memory-efficient optimizer for training large models that performs
adaptive optimization steps within lower-dimensional subspaces, while consistently …
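
The idea of keeping adaptive-optimizer statistics in a lower-dimensional subspace can be illustrated as follows. This is a generic sketch (project the gradient onto an r-dimensional basis, run Adam-style moments there, map the step back), not LDAdam's actual projection or error-correction scheme; all names here are illustrative.

```python
import numpy as np

def subspace_adam_step(w, grad, P, m, v, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam-style step whose moment statistics live in the subspace
    spanned by the columns of P (d x r, assumed orthonormal).

    Optimizer state is two r-vectors instead of two d-vectors, which is
    the memory saving the abstract alludes to. Illustrative only.
    """
    g = P.T @ grad                         # project gradient to r dimensions
    m = b1 * m + (1 - b1) * g              # first-moment estimate (size r)
    v = b2 * v + (1 - b2) * g**2           # second-moment estimate (size r)
    update = P @ (m / (np.sqrt(v) + eps))  # map the adaptive step back to d dims
    return w - lr * update, m, v

# Toy usage: d = 1000 parameters, optimizer state kept in r = 10 dimensions.
rng = np.random.default_rng(1)
d, r = 1000, 10
P, _ = np.linalg.qr(rng.normal(size=(d, r)))   # orthonormal basis
w = rng.normal(size=d)
m, v = np.zeros(r), np.zeros(r)
w, m, v = subspace_adam_step(w, grad=2 * w, P=P, m=m, v=v)
```

State memory here is 2r floats rather than 2d, at the cost of only adapting within the chosen subspace.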

Towards a science exocortex

KG Yager - Digital Discovery, 2024 - pubs.rsc.org
Artificial intelligence (AI) methods are poised to revolutionize intellectual work, with
generative AI enabling automation of text analysis, text generation, and simple decision …

On fairness of low-rank adaptation of large models

Z Ding, KZ Liu, P Peetathawatchai, B Isik… - arXiv preprint arXiv …, 2024 - arxiv.org
Low-rank adaptation of large models, particularly LoRA, has gained traction due to its
computational efficiency. This efficiency, contrasted with the prohibitive costs of full-model …

Pre-trained Audio Transformer as a Foundational AI Tool for Gravitational Waves

C Chatterjee, A Petulante, K Jani… - arXiv preprint arXiv …, 2024 - arxiv.org
As gravitational wave detectors become more advanced and sensitive, the number of
signals recorded by Advanced LIGO and Virgo from merging compact objects is expected to …

Learning Parameter Sharing with Tensor Decompositions and Sparsity

C Üyük, M Lasby, M Yassin, U Evci… - arXiv preprint arXiv …, 2024 - arxiv.org
Large neural networks achieve remarkable performance, but their size hinders deployment
on resource-constrained devices. While various compression techniques exist, parameter …
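
Parameter sharing via factorization, as in the title above, can be sketched minimally: several layer weights are rebuilt from one shared low-rank basis plus small per-layer coefficients. This is a generic low-rank sharing illustration, not the specific tensor decomposition or sparsity scheme of the cited paper; all shapes and names are assumptions.

```python
import numpy as np

def build_shared_layers(U, coeffs):
    """Reconstruct each layer weight as W_l = U @ V_l.

    The basis U (out x r) is shared across all layers; only the small
    V_l (r x in) matrices are layer-specific. Illustrative sketch only.
    """
    return [U @ V for V in coeffs]

rng = np.random.default_rng(2)
out_dim, in_dim, r, n_layers = 64, 64, 8, 6
U = rng.normal(size=(out_dim, r))                       # shared basis
Vs = [rng.normal(size=(r, in_dim)) for _ in range(n_layers)]
layers = build_shared_layers(U, Vs)

# Parameter count: shared factorization vs. six independent dense layers.
shared = U.size + sum(V.size for V in Vs)   # 512 + 6 * 512 = 3584
dense = n_layers * out_dim * in_dim         # 6 * 64 * 64 = 24576
```

The saving grows with the number of layers that reuse the basis, at the cost of constraining every weight to the same rank-r column space.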

MiniMedGPT: Efficient Large Vision–Language Model for Medical Visual Question Answering

AR Alsabbagh, T Mansour, M Al-Kharabsheh… - Pattern Recognition …, 2025 - Elsevier
Abstract While Large Vision–Language Models (LVLMs) like GPT-4 and Gemini
demonstrate significant potential, their utilization in the medical domain remains largely …

Sparse Gradient Compression for Fine-Tuning Large Language Models

DH Yang, MM Amiri, T Pedapati, S Chaudhury… - arXiv preprint arXiv …, 2025 - arxiv.org
Fine-tuning large language models (LLMs) for downstream tasks has become increasingly
crucial due to their widespread use and the growing availability of open-source models …
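
Sparse gradient compression, named in the title above, is commonly realized by keeping only the largest-magnitude gradient entries. The sketch below shows generic top-k sparsification, not the cited paper's method; the function names and the choice of k are illustrative.

```python
import numpy as np

def topk_compress(grad, k):
    """Keep the k largest-magnitude entries of a dense gradient.

    Returns (indices, values); every other coordinate is treated as zero.
    Generic top-k sparsification sketch.
    """
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]

def topk_decompress(idx, vals, d):
    """Rebuild a dense d-dimensional gradient from the sparse form."""
    g = np.zeros(d)
    g[idx] = vals
    return g

rng = np.random.default_rng(3)
g = rng.normal(size=10_000)
idx, vals = topk_compress(g, k=100)           # 100x fewer values to store/send
g_hat = topk_decompress(idx, vals, g.size)
# The reconstruction matches g exactly on the kept coordinates.
assert np.allclose(g_hat[idx], g[idx])
```

In practice such schemes are often paired with error feedback, accumulating the discarded coordinates into the next step's gradient so the compression error does not build up.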

Shifting Attention to You: Personalized Brain-Inspired AI Models

SC Zhao, Y Hu, J Lee, A Bender, T Mazumdar… - arXiv preprint arXiv …, 2025 - arxiv.org
The integration of human and artificial intelligence represents a scientific opportunity to
advance our understanding of information processing, as each system offers unique …

RepLoRA: Reparameterizing Low-Rank Adaptation via the Perspective of Mixture of Experts

T Truong, C Nguyen, H Nguyen, M Le, T Le… - arXiv preprint arXiv …, 2025 - arxiv.org
Low-rank adaptation (LoRA) has emerged as a powerful method for fine-tuning large-scale
foundation models. Despite its popularity, the theoretical understanding of LoRA has …