Continual learning of natural language processing tasks: A survey

Z Ke, B Liu - arXiv preprint arXiv:2211.12701, 2022 - arxiv.org
Continual learning (CL) is a learning paradigm that emulates the human capability of
learning and accumulating knowledge continually without forgetting the previously learned …

Towards lifelong learning of large language models: A survey

J Zheng, S Qiu, C Shi, Q Ma - ACM Computing Surveys, 2024 - dl.acm.org
As the applications of large language models (LLMs) expand across diverse fields, their
ability to adapt to ongoing changes in data, tasks, and user preferences becomes crucial …

Recent advances of foundation language models-based continual learning: A survey

Y Yang, J Zhou, X Ding, T Huai, S Liu, Q Chen… - ACM Computing …, 2025 - dl.acm.org
Recently, foundation language models (LMs) have marked significant achievements in the
domains of natural language processing and computer vision. Unlike traditional neural …

Learn or recall? revisiting incremental learning with pre-trained language models

J Zheng, S Qiu, Q Ma - arXiv preprint arXiv:2312.07887, 2023 - arxiv.org
Incremental Learning (IL) has been a long-standing problem in both vision and Natural
Language Processing (NLP) communities. In recent years, as Pre-trained Language Models …

Prompts can play lottery tickets well: Achieving lifelong information extraction via lottery prompt tuning

Z Liang, F Wei, Y Jie, Y Qian, Z Hao… - Proceedings of the 61st …, 2023 - aclanthology.org
Thanks to the recent success of Pre-trained Language Models (PLMs), it has become a
promising research direction to develop a universal model (UIE) that can solve all typical …

Towards fewer hallucinations in knowledge-grounded dialogue generation via augmentative and contrastive knowledge-dialogue

B Sun, Y Li, F Mi, F Bie, Y Li, K Li - … of the 61st Annual Meeting of …, 2023 - aclanthology.org
Existing knowledge-grounded open-domain dialogue generation models often face the
hallucination problem, i.e., the dialogue generative model will persist in an inappropriate …

Semi-supervised lifelong language learning

Y Zhao, Y Zheng, B Yu, Z Tian, D Lee, J Sun… - arXiv preprint arXiv …, 2022 - arxiv.org
Lifelong learning aims to accumulate knowledge and alleviate catastrophic forgetting when
learning tasks sequentially. However, existing lifelong language learning methods only …

StructSP: Efficient fine-tuning of task-oriented dialog system by using structure-aware boosting and grammar constraints

T Do, P Nguyen, M Nguyen - Findings of the Association for …, 2023 - aclanthology.org
We have investigated methods utilizing hierarchical structure information representation in
the semantic parsing task and have devised a method that reinforces the semantic …

Dirichlet continual learning: tackling catastrophic forgetting in NLP

M Zeng, H Yang, W Xue, Q Liu, Y Guo - The 40th Conference on …, 2024 - openreview.net
Catastrophic forgetting poses a significant challenge in continual learning (CL). In the
context of Natural Language Processing, generative-based rehearsal CL methods have …

Continual learning with dirichlet generative-based rehearsal

M Zeng, W Xue, Q Liu, Y Guo - arXiv preprint arXiv:2309.06917, 2023 - arxiv.org
Recent advancements in data-driven task-oriented dialogue systems (ToDs) struggle with
incremental learning due to computational constraints and time-consuming issues …