A comprehensive survey of continual learning: theory, method and application
To cope with real-world dynamics, an intelligent system needs to incrementally acquire,
update, accumulate, and exploit knowledge throughout its lifetime. This ability, known as …
Parameter-efficient fine-tuning for large models: A comprehensive survey
Large models represent a groundbreaking advancement in multiple application fields,
enabling remarkable achievements across various tasks. However, their unprecedented …
BlenderBot 3: a deployed conversational agent that continually learns to responsibly engage
We present BlenderBot 3, a 175B parameter dialogue model capable of open-domain
conversation with access to the internet and a long-term memory, and having been trained …
Achieving forgetting prevention and knowledge transfer in continual learning
Continual learning (CL) learns a sequence of tasks incrementally with the goal of achieving
two main objectives: overcoming catastrophic forgetting (CF) and encouraging knowledge …
Progressive prompts: Continual learning for language models
We introduce Progressive Prompts, a simple and efficient approach for continual learning in
language models. Our method allows forward transfer and resists catastrophic forgetting …
Fine-tuned language models are continual learners
Recent work on large language models relies on the intuition that most natural language
processing tasks can be described via natural language instructions. Language models …
Continual learning of natural language processing tasks: A survey
Continual learning (CL) is a learning paradigm that emulates the human capability of
learning and accumulating knowledge continually without forgetting the previously learned …
Mitigating the alignment tax of RLHF
LLMs acquire a wide range of abilities during pre-training, but aligning LLMs under
Reinforcement Learning with Human Feedback (RLHF) can lead to forgetting pretrained …
Recent advances of foundation language models-based continual learning: A survey
Recently, foundation language models (LMs) have marked significant achievements in the
domains of natural language processing and computer vision. Unlike traditional neural …
Continual prompt tuning for dialog state tracking
A desirable dialog system should be able to continually learn new skills without forgetting
old ones, and thereby adapt to new domains or tasks in its life cycle. However, continually …