Parameter-efficient fine-tuning for large models: A comprehensive survey

Z Han, C Gao, J Liu, J Zhang, SQ Zhang - arXiv preprint arXiv:2403.14608, 2024 - arxiv.org
… a new multimodal LLM (MLLM) by pre-training on tremendous image-text
pairs from scratch can be exceedingly resource-consuming; connecting an existing LLM with …
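
The recipe this snippet alludes to is to keep both pretrained components frozen and train only a light bridge between them. Below is a minimal sketch of that idea, assuming a LLaVA-style MLP projector; the class name, dimensions, and the Identity stand-ins for the pretrained encoder and LLM are illustrative, not code from the survey.

```python
# Sketch: parameter-efficient MLLM assembly -- freeze a pretrained vision
# encoder and LLM, train only a small projector that maps image features
# into the LLM's token-embedding space.
import torch
import torch.nn as nn

class VisionToLLMProjector(nn.Module):
    """Trainable bridge: vision feature dim -> LLM embedding dim."""
    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, image_feats: torch.Tensor) -> torch.Tensor:
        # image_feats: (batch, num_patches, vision_dim)
        return self.proj(image_feats)

vision_encoder = nn.Identity()  # stand-in for a frozen pretrained ViT
llm = nn.Identity()             # stand-in for a frozen pretrained LLM
# In practice you would set requires_grad=False on both pretrained models;
# only the projector's parameters receive gradients.
projector = VisionToLLMProjector()

image_feats = torch.randn(2, 16, 1024)           # fake ViT patch features
llm_tokens = projector(vision_encoder(image_feats))
print(llm_tokens.shape)                          # (2, 16, 4096)
print(sum(p.numel() for p in projector.parameters()), "trainable parameters")
```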

M6-rec: Generative pretrained language models are open-ended recommender systems

Z Cui, J Ma, C Zhou, J Zhou, H Yang - arXiv preprint arXiv:2205.08084, 2022 - arxiv.org
Industrial recommender systems have been growing increasingly complex, may
involve diverse domains such as e-commerce products and user-generated …
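
The open-ended framing in the title amounts to verbalizing user behavior as plain text so a generative language model can score or produce recommendations across domains. A tiny sketch of that conversion step follows; the field names and prompt template are made up for illustration and are not M6-Rec's actual format.

```python
# Sketch: turn an interaction history into a text prompt that any
# generative LM can consume for recommendation scoring.
def behavior_to_prompt(user_clicks: list[str], candidate: str) -> str:
    history = "; ".join(user_clicks)
    return (
        f"A user recently clicked: {history}. "
        f"Will the user click '{candidate}'? Answer yes or no."
    )

prompt = behavior_to_prompt(
    ["wireless earbuds", "phone case", "USB-C charger"],
    "power bank",
)
print(prompt)  # feed this string to a generative LM and read off its answer
```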

Aging with grace: Lifelong model editing with discrete key-value adaptors

T Hartvigsen, S Sankaranarayanan… - Advances in …, 2024 - proceedings.neurips.cc
Deployed language models decay over time due to shifting inputs, changing user needs, or
emergent world-knowledge gaps. When such problems are identified, we want to make …
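
The "discrete key-value adaptors" in the title cache edits as (key, value, deferral radius) entries at a single layer: an input whose hidden state lands inside a cached key's radius gets the stored value, and everything else flows through the original layer unchanged. A rough sketch of that mechanism, with an assumed wrapping point, toy layer, and hyperparameters:

```python
# Sketch of a GRACE-style key-value adaptor wrapped around one layer.
import torch
import torch.nn as nn

class KeyValueAdaptor(nn.Module):
    def __init__(self, layer: nn.Module, init_radius: float = 1.0):
        super().__init__()
        self.layer = layer
        self.keys: list[torch.Tensor] = []  # cached hidden states
        self.values = nn.ParameterList()    # learned replacement outputs
        self.radii: list[float] = []
        self.init_radius = init_radius

    def add_edit(self, key: torch.Tensor, value: torch.Tensor) -> None:
        """Register one edit: near `key`, the layer should output `value`."""
        self.keys.append(key.detach())
        self.values.append(nn.Parameter(value.clone()))
        self.radii.append(self.init_radius)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        for key, value, radius in zip(self.keys, self.values, self.radii):
            if torch.dist(h, key) < radius:  # inside a deferral radius
                return value                 # override with the edited output
        return self.layer(h)                 # otherwise behave normally

adaptor = KeyValueAdaptor(nn.Linear(8, 8))
h = torch.randn(8)
adaptor.add_edit(key=h, value=torch.zeros(8))
print(adaptor(h))               # hits the cached edit -> zeros
print(adaptor(torch.randn(8)))  # almost surely misses -> normal output
```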

Multitask prompt tuning enables parameter-efficient transfer learning

Z Wang, R Panda, L Karlinsky, R Feris, H Sun… - arXiv preprint arXiv …, 2023 - arxiv.org
Prompt tuning, in which a base pretrained model is adapted to each task via conditioning on
learned prompt vectors, has emerged as a promising approach for efficiently adapting large …
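
Plain prompt tuning, as the snippet describes it, freezes the base model and learns only a short matrix of prompt vectors prepended to the input embeddings; the multitask variant of this paper further decomposes those vectors into shared and task-specific parts. A minimal sketch of the single-task case, with illustrative dimensions and a toy frozen encoder:

```python
# Sketch: soft prompt tuning -- the base model is frozen and the only
# trainable parameters are `prompt_len` learned prompt vectors.
import torch
import torch.nn as nn

class PromptTunedModel(nn.Module):
    def __init__(self, base: nn.Module, embed_dim: int = 64, prompt_len: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base model stays frozen
        self.soft_prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, embed_dim)
        batch = input_embeds.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)
        return self.base(torch.cat([prompt, input_embeds], dim=1))

base = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2,
)
model = PromptTunedModel(base)
out = model(torch.randn(2, 10, 64))
print(out.shape)  # (2, 18, 64): 8 prompt positions + 10 input tokens
```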