A comprehensive survey of continual learning: theory, method and application

L Wang, X Zhang, H Su, J Zhu - IEEE Transactions on Pattern …, 2024 - ieeexplore.ieee.org
To cope with real-world dynamics, an intelligent system needs to incrementally acquire,
update, accumulate, and exploit knowledge throughout its lifetime. This ability, known as …

Parameter-efficient fine-tuning for large models: A comprehensive survey

Z Han, C Gao, J Liu, J Zhang, SQ Zhang - arXiv preprint arXiv:2403.14608, 2024 - arxiv.org
Large models represent a groundbreaking advancement in multiple application fields,
enabling remarkable achievements across various tasks. However, their unprecedented …

BlenderBot 3: a deployed conversational agent that continually learns to responsibly engage

K Shuster, J Xu, M Komeili, D Ju, EM Smith… - arXiv preprint arXiv …, 2022 - arxiv.org
We present BlenderBot 3, a 175B parameter dialogue model capable of open-domain
conversation with access to the internet and a long-term memory, and having been trained …

Achieving forgetting prevention and knowledge transfer in continual learning

Z Ke, B Liu, N Ma, H Xu, L Shu - Advances in Neural …, 2021 - proceedings.neurips.cc
Continual learning (CL) learns a sequence of tasks incrementally with the goal of achieving
two main objectives: overcoming catastrophic forgetting (CF) and encouraging knowledge …

Progressive prompts: Continual learning for language models

A Razdaibiedina, Y Mao, R Hou, M Khabsa… - arXiv preprint arXiv …, 2023 - arxiv.org
We introduce Progressive Prompts-a simple and efficient approach for continual learning in
language models. Our method allows forward transfer and resists catastrophic forgetting …

Fine-tuned language models are continual learners

T Scialom, T Chakrabarty, S Muresan - arXiv preprint arXiv:2205.12393, 2022 - arxiv.org
Recent work on large language models relies on the intuition that most natural language
processing tasks can be described via natural language instructions. Language models …

Continual learning of natural language processing tasks: A survey

Z Ke, B Liu - arXiv preprint arXiv:2211.12701, 2022 - arxiv.org
Continual learning (CL) is a learning paradigm that emulates the human capability of
learning and accumulating knowledge continually without forgetting the previously learned …

Mitigating the alignment tax of RLHF

Y Lin, H Lin, W Xiong, S Diao, J Liu… - Proceedings of the …, 2024 - aclanthology.org
LLMs acquire a wide range of abilities during pre-training, but aligning LLMs under
Reinforcement Learning with Human Feedback (RLHF) can lead to forgetting pretrained …

Recent advances of foundation language models-based continual learning: A survey

Y Yang, J Zhou, X Ding, T Huai, S Liu, Q Chen… - ACM Computing …, 2025 - dl.acm.org
Recently, foundation language models (LMs) have marked significant achievements in the
domains of natural language processing and computer vision. Unlike traditional neural …

Continual prompt tuning for dialog state tracking

Q Zhu, B Li, F Mi, X Zhu, M Huang - arXiv preprint arXiv:2203.06654, 2022 - arxiv.org
A desirable dialog system should be able to continually learn new skills without forgetting
old ones, and thereby adapt to new domains or tasks in its life cycle. However, continually …