A comprehensive survey of continual learning: theory, method and application
To cope with real-world dynamics, an intelligent system needs to incrementally acquire,
update, accumulate, and exploit knowledge throughout its lifetime. This ability, known as …
A comprehensive survey of forgetting in deep learning beyond continual learning
Forgetting refers to the loss or deterioration of previously acquired knowledge. While
existing surveys on forgetting have primarily focused on continual learning, forgetting is a …
A survey on negative transfer
Transfer learning (TL) utilizes data or knowledge from one or more source domains to
facilitate learning in a target domain. It is particularly useful when the target domain has very …
On the stability-plasticity dilemma of class-incremental learning
A primary goal of class-incremental learning is to strike a balance between stability and
plasticity, where models should be both stable enough to retain knowledge learned from …
Wide neural networks forget less catastrophically
A primary focus area in continual learning research is alleviating the "catastrophic forgetting"
problem in neural networks by designing new algorithms that are more robust to the …
Continual learning in the teacher-student setup: Impact of task similarity
Continual learning, the ability to learn many tasks in sequence, is critical for artificial
learning systems. Yet standard training methods for deep networks often suffer from …
Deep reinforcement and infomax learning
We posit that a reinforcement learning (RL) agent will perform better when it uses
representations that are better at predicting the future, particularly in terms of few-shot …
Theory on forgetting and generalization of continual learning
Continual learning (CL), which aims to learn a sequence of tasks, has attracted significant
recent attention. However, most work has focused on the experimental performance of CL …
The ideal continual learner: An agent that never forgets
The goal of continual learning is to find a model that solves multiple learning tasks which are
presented sequentially to the learner. A key challenge in this setting is that the learner may …
How catastrophic can catastrophic forgetting be in linear regression?
To better understand catastrophic forgetting, we study fitting an overparameterized linear
model to a sequence of tasks with different input distributions. We analyze how much the …