Chain of LoRA: Efficient fine-tuning of language models via residual learning

W Xia, C Qin, E Hazan - arXiv preprint arXiv:2401.04151, 2024 - arxiv.org
Fine-tuning is the primary methodology for tailoring pre-trained large language models to
specific tasks. As the model's scale and the diversity of tasks expand, parameter-efficient fine …

How to configure good in-context sequence for visual question answering

L Li, J Peng, H Chen, C Gao… - Proceedings of the IEEE…, 2024 - openaccess.thecvf.com
Inspired by the success of Large Language Models in dealing with new tasks via In-Context
Learning (ICL) in NLP, researchers have also developed Large Vision-Language Models …

Lever LM: configuring in-context sequence to lever large vision language models

X Yang, Y Peng, H Ma, S Xu, C Zhang, Y Han… - arXiv preprint arXiv…, 2023 - arxiv.org
As Archimedes famously said, "Give me a lever long enough and a fulcrum on which to
place it, and I shall move the world"; in this study, we propose to use a tiny Language Model …

Lifelong Event Detection with Embedding Space Separation and Compaction

C Qin, R Chen, R Zhao, W Xia, S Joty - arXiv preprint arXiv:2404.02507, 2024 - arxiv.org
To mitigate forgetting, existing lifelong event detection methods typically maintain a memory
module and replay the stored memory data during the learning of a new task. However, the …