A comprehensive survey of continual learning: Theory, method and application
To cope with real-world dynamics, an intelligent system needs to incrementally acquire,
update, accumulate, and exploit knowledge throughout its lifetime. This ability, known as …
A comprehensive survey of forgetting in deep learning beyond continual learning
Forgetting refers to the loss or deterioration of previously acquired knowledge. While
existing surveys on forgetting have primarily focused on continual learning, forgetting is a …
Deep class-incremental learning: A survey
Deep models, e.g., CNNs and Vision Transformers, have achieved impressive results
in many vision tasks in the closed world. However, novel classes emerge from time to time in …
DualPrompt: Complementary prompting for rehearsal-free continual learning
Continual learning aims to enable a single model to learn a sequence of tasks without
catastrophic forgetting. Top-performing methods usually require a rehearsal buffer to store …
S-prompts learning with pre-trained transformers: An Occam's razor for domain incremental learning
State-of-the-art deep neural networks are still struggling to address the catastrophic
forgetting problem in continual learning. In this paper, we propose one simple paradigm …
Learning to prompt for continual learning
The mainstream paradigm behind continual learning has been to adapt the model
parameters to non-stationary data distributions, where catastrophic forgetting is the central …
Online continual learning through mutual information maximization
This paper proposes a new online continual learning approach called OCM based on
mutual information (MI) maximization. It achieves two objectives that are critical in dealing …
Model adaptation: Historical contrastive learning for unsupervised domain adaptation without source data
Unsupervised domain adaptation aims to align a labeled source domain and an unlabeled
target domain, but it requires access to the source data, which often raises concerns in data …
PCR: Proxy-based contrastive replay for online class-incremental continual learning
Online class-incremental continual learning is a specific task of continual learning. It aims to
continuously learn new classes from a data stream, and the samples of the data stream are seen …
Computationally budgeted continual learning: What does matter?
A Prabhu, HA Al Kader Hammoud… - Proceedings of the …, 2023 - openaccess.thecvf.com
Continual Learning (CL) aims to sequentially train models on streams of incoming data that
vary in distribution by preserving previous knowledge while adapting to new data. Current …