A survey on LoRA of large language models

Y Mao, Y Ge, Y Fan, W Xu, Y Mi, Z Hu… - Frontiers of Computer …, 2025 - Springer
Abstract Low-Rank Adaptation (LoRA), which updates the dense neural network layers with
pluggable low-rank matrices, is one of the best-performing parameter-efficient fine-tuning …
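The snippet above describes LoRA's core idea: a frozen dense weight is augmented with a pluggable product of two low-rank matrices, so only the small factors are trained. A minimal sketch of that update, with illustrative dimensions and initialization that are assumptions rather than details from the surveyed paper:

```python
# Sketch of the LoRA update: frozen weight W plus a trainable
# low-rank delta B @ A. Only r*(d_in + d_out) parameters are
# trained instead of the full d_in*d_out. Shapes are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4          # rank r chosen much smaller than d_in, d_out

W = rng.normal(size=(d_out, d_in))  # pretrained weight, kept frozen
A = rng.normal(size=(r, d_in)) * 0.01
B = np.zeros((d_out, r))            # B starts at zero, so the adapter's initial delta is zero

x = rng.normal(size=(d_in,))
h = W @ x + B @ (A @ x)             # forward pass with the low-rank adapter attached

# at initialization the adapter contributes nothing:
assert np.allclose(h, W @ x)
```

Because the adapter is additive, it can be merged into W after training (W + B @ A) or detached again, which is what makes it "pluggable".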

Transformers in source code generation: A comprehensive survey

H Ghaemi, Z Alizadehsani, A Shahraki… - Journal of Systems …, 2024 - Elsevier
Transformers have revolutionized natural language processing (NLP) and have had a huge
impact on automating tasks. Recently, transformers have led to the development of powerful …

Unveiling and harnessing hidden attention sinks: Enhancing large language models without training through attention calibration

Z Yu, Z Wang, Y Fu, H Shi, K Shaikh, YC Lin - arXiv preprint arXiv …, 2024 - arxiv.org
Attention is a fundamental component behind the remarkable achievements of large
language models (LLMs). However, our current understanding of the attention mechanism …

MBIAS: Mitigating bias in large language models while retaining context

S Raza, A Raval, V Chatrath - arXiv preprint arXiv:2405.11290, 2024 - arxiv.org
The deployment of Large Language Models (LLMs) in diverse applications necessitates an
assurance of safety without compromising the contextual integrity of the generated content …

Enhancing Task Performance in Continual Instruction Fine-tuning Through Format Uniformity

X Tan, L Cheng, X Qiu, S Shi, Y Cheng, W Chu… - Proceedings of the 47th …, 2024 - dl.acm.org
In recent advancements, large language models (LLMs) have demonstrated remarkable
capabilities in diverse tasks, primarily through interactive question-answering with humans …

Pedagogical alignment of large language models (LLM) for personalized learning: a survey, trends and challenges

MA Razafinirina, WG Dimbisoa, T Mahatody - Journal of Intelligent …, 2024 - scirp.org
This survey paper investigates how personalized learning offered by Large Language
Models (LLMs) could transform educational experiences. We explore Knowledge Editing …

Revisiting Benchmark and Assessment: An Agent-based Exploratory Dynamic Evaluation Framework for LLMs

W Wang, Z Ma, P Liu, M Chen - arXiv preprint arXiv:2410.11507, 2024 - arxiv.org
While various vertical domain large language models (LLMs) have been developed, the
challenge of automatically evaluating their performance across different domains remains …

RESPECT: A framework for promoting inclusive and respectful conversations in online communications

S Raza, AY Muaad, E Hasan, M Garg… - Natural Language …, 2025 - Elsevier
Toxicity and bias in online conversations hinder respectful interactions, leading to issues
such as harassment and discrimination. While advancements in natural language …

GRL-Prompt: Towards Knowledge Graph based Prompt Optimization via Reinforcement Learning

Y Liu, T Liu, T Zhang, Y Xia, J Wang, Z Shen… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) have demonstrated impressive success in a wide range of
natural language processing (NLP) tasks due to their extensive general knowledge of the …

Low-Rank Adaptation for Scalable Fine-Tuning of Pre-Trained Language Models

H Dong, J Shun - 2025 - preprints.org
Low-Rank Adaptation (LoRA) is a computationally efficient approach for fine-tuning large
pre-trained language models, designed to reduce memory and computational overhead by …