A comprehensive survey of continual learning: Theory, method and application

L Wang, X Zhang, H Su, J Zhu - IEEE Transactions on Pattern …, 2024 - ieeexplore.ieee.org
To cope with real-world dynamics, an intelligent system needs to incrementally acquire,
update, accumulate, and exploit knowledge throughout its lifetime. This ability, known as …

A comprehensive survey of forgetting in deep learning beyond continual learning

Z Wang, E Yang, L Shen… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Forgetting refers to the loss or deterioration of previously acquired knowledge. While
existing surveys on forgetting have primarily focused on continual learning, forgetting is a …

[PDF][PDF] Deep class-incremental learning: A survey

DW Zhou, QW Wang, ZH Qi, HJ Ye… - arXiv preprint arXiv …, 2023 - researchgate.net
Deep models, e.g., CNNs and Vision Transformers, have achieved impressive results
in many vision tasks in the closed world. However, novel classes emerge from time to time in …

DualPrompt: Complementary prompting for rehearsal-free continual learning

Z Wang, Z Zhang, S Ebrahimi, R Sun, H Zhang… - European conference on …, 2022 - Springer
Continual learning aims to enable a single model to learn a sequence of tasks without
catastrophic forgetting. Top-performing methods usually require a rehearsal buffer to store …

S-Prompts learning with pre-trained transformers: An Occam's razor for domain incremental learning

Y Wang, Z Huang, X Hong - Advances in Neural …, 2022 - proceedings.neurips.cc
State-of-the-art deep neural networks are still struggling to address the catastrophic
forgetting problem in continual learning. In this paper, we propose one simple paradigm …

Learning to prompt for continual learning

Z Wang, Z Zhang, CY Lee, H Zhang… - Proceedings of the …, 2022 - openaccess.thecvf.com
The mainstream paradigm behind continual learning has been to adapt the model
parameters to non-stationary data distributions, where catastrophic forgetting is the central …

Online continual learning through mutual information maximization

Y Guo, B Liu, D Zhao - International conference on machine …, 2022 - proceedings.mlr.press
This paper proposes a new online continual learning approach called OCM based on
mutual information (MI) maximization. It achieves two objectives that are critical in dealing …

Model adaptation: Historical contrastive learning for unsupervised domain adaptation without source data

J Huang, D Guan, A Xiao, S Lu - Advances in neural …, 2021 - proceedings.neurips.cc
Unsupervised domain adaptation aims to align a labeled source domain and an unlabeled
target domain, but it requires access to the source data, which often raises concerns in data …

PCR: Proxy-based contrastive replay for online class-incremental continual learning

H Lin, B Zhang, S Feng, X Li… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Online class-incremental continual learning is a specific task of continual learning. It aims to
continuously learn new classes from a data stream, and the samples of the data stream are seen …

Computationally budgeted continual learning: What does matter?

A Prabhu, HA Al Kader Hammoud… - Proceedings of the …, 2023 - openaccess.thecvf.com
Continual Learning (CL) aims to sequentially train models on streams of incoming data that
vary in distribution by preserving previous knowledge while adapting to new data. Current …