A Survey on LoRA of Large Language Models

Y Mao, Y Ge, Y Fan, W Xu, Y Mi, Z Hu… - Frontiers of Computer …, 2025 - Springer
Abstract Low-Rank Adaptation (LoRA), which updates dense neural network layers with
pluggable low-rank matrices, is one of the best-performing parameter-efficient fine-tuning …

SeeD: Accelerating Reasoning Tree Construction via Scheduled Speculative Decoding

Z Wang, J Wu, Y Lai, C Zhang, D Zhou - arXiv preprint arXiv:2406.18200, 2024 - arxiv.org
Large Language Models (LLMs) demonstrate remarkable emergent abilities across various
tasks, yet fall short on complex reasoning and planning tasks. The tree-search-based …

ClimaQA: An Automated Evaluation Framework for Climate Foundation Models

VV Manivannan, Y Jafari, S Eranky, S Ho, R Yu… - arXiv preprint arXiv …, 2024 - arxiv.org
The use of foundation models in climate science has recently gained significant attention.
However, a critical issue remains: the lack of a comprehensive evaluation framework …

A Comprehensive Evaluation of Parameter-Efficient Fine-Tuning on Method-Level Code Smell Detection

B Zhang, P Liang, X Zhou, X Zhou, D Lo… - arXiv preprint arXiv …, 2024 - arxiv.org
Code smells are suboptimal coding practices that negatively impact the quality of software
systems. Existing detection methods, relying on heuristics or Machine Learning (ML) and …

Parameter-Efficient Active Learning for Foundational Models

AL Narayanan, R Krishnan, A Machireddy… - arXiv preprint arXiv …, 2024 - arxiv.org
Foundational vision transformer models have shown impressive few-shot performance on
many vision tasks. This research presents a novel investigation into the application of …