Consistent prompting for rehearsal-free continual learning

Z Gao, J Cen, X Chang - … of the IEEE/CVF Conference on …, 2024 - openaccess.thecvf.com
Continual learning empowers models to adapt autonomously to ever-changing
environments or data streams without forgetting old knowledge. Prompt-based approaches …
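The snippet names prompt-based approaches. As background, here is a minimal sketch of the prompt-pool mechanism this family builds on (in the style of L2P): learnable prompts paired with keys, selected per input by query-key similarity. All sizes and names below are hypothetical placeholders, and this is not this paper's specific consistent-prompting scheme.

```python
import torch
import torch.nn.functional as F

class PromptPool(torch.nn.Module):
    """A pool of learnable prompts; each prompt has a key used for selection."""
    def __init__(self, pool_size=10, prompt_len=5, dim=768, top_k=3):
        super().__init__()
        self.keys = torch.nn.Parameter(torch.randn(pool_size, dim))
        self.prompts = torch.nn.Parameter(torch.randn(pool_size, prompt_len, dim))
        self.top_k = top_k

    def forward(self, query):
        # query: (batch, dim) feature from a frozen backbone, e.g. a [CLS] token
        sim = F.cosine_similarity(query.unsqueeze(1), self.keys.unsqueeze(0), dim=-1)
        idx = sim.topk(self.top_k, dim=1).indices      # (batch, top_k)
        selected = self.prompts[idx]                   # (batch, top_k, prompt_len, dim)
        return selected.flatten(1, 2)                  # (batch, top_k * prompt_len, dim)

pool = PromptPool()
feat = torch.randn(4, 768)        # stand-in for backbone features
print(pool(feat).shape)           # torch.Size([4, 15, 768])
```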

Continuous transfer of neural network representational similarity for incremental learning

S Tian, W Li, X Ning, H Ran, H Qin, P Tiwari - Neurocomputing, 2023 - Elsevier
The incremental learning paradigm in machine learning has consistently been a focus of
academic research. It is similar to the way in which biological systems learn, and reduces …
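The title concerns transferring representational similarity between networks. One standard way to quantify such similarity is linear CKA; the sketch below uses CKA purely as an illustration of comparing two models' activations, and is not necessarily the measure this paper adopts.

```python
import torch

def linear_cka(x, y):
    """Linear Centered Kernel Alignment between two activation matrices
    of shape (n_samples, n_features). Returns a similarity in [0, 1]."""
    x = x - x.mean(0, keepdim=True)   # center each feature
    y = y - y.mean(0, keepdim=True)
    hsic = (x.T @ y).norm() ** 2
    return (hsic / ((x.T @ x).norm() * (y.T @ y).norm())).item()

old_acts = torch.randn(128, 64)   # layer activations of the old model on a probe batch
new_acts = torch.randn(128, 64)   # same layer after incremental training
print(linear_cka(old_acts, new_acts))   # in [0, 1]
print(linear_cka(old_acts, old_acts))   # identical representations give 1.0
```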

Data augmented flatness-aware gradient projection for continual learning

E Yang, L Shen, Z Wang, S Liu… - Proceedings of the …, 2023 - openaccess.thecvf.com
The goal of continual learning (CL) is to continuously learn new tasks without forgetting
previously learned old tasks. To alleviate catastrophic forgetting, gradient projection based …
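The snippet names gradient projection methods. A minimal sketch of the basic step this family shares: remove from the new-task gradient its component in a subspace spanning old-task gradients, so the update is (approximately) harmless to old tasks. The data-augmented, flatness-aware parts specific to this paper are not shown, and all dimensions are hypothetical.

```python
import torch

def project_orthogonal(grad, basis):
    """Project `grad` onto the orthogonal complement of span(basis columns)."""
    # basis: (d, k) with orthonormal columns spanning the old-task gradient space
    return grad - basis @ (basis.T @ grad)

d, k = 100, 5
basis, _ = torch.linalg.qr(torch.randn(d, k))   # orthonormalize a toy old-task subspace
g = torch.randn(d)
g_proj = project_orthogonal(g, basis)
print(torch.allclose(basis.T @ g_proj, torch.zeros(k), atol=1e-5))  # True
```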

Revisiting Flatness-aware Optimization in Continual Learning with Orthogonal Gradient Projection

E Yang, L Shen, Z Wang, S Liu, G Guo… - … on Pattern Analysis …, 2025 - ieeexplore.ieee.org
The goal of continual learning (CL) is to learn from a series of continuously arriving new
tasks without forgetting previously learned old tasks. To avoid catastrophic forgetting of old …
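This entry adds flatness-aware optimization on top of orthogonal projection. The canonical flatness-aware update is sharpness-aware minimization (SAM): evaluate the gradient at a worst-case weight perturbation so optimization settles in flat minima. Below is a toy first-order sketch on a hypothetical quadratic; the orthogonal-projection part of the paper is omitted.

```python
import torch

# Toy objective with one sharp and one flat direction (hypothetical stand-in).
def loss_fn(w):
    return (w[0] ** 2) + 10.0 * (w[1] ** 2)

w = torch.tensor([1.0, 1.0], requires_grad=True)
opt = torch.optim.SGD([w], lr=0.05)
rho = 0.05  # perturbation radius (hypothetical value)

for _ in range(100):
    opt.zero_grad()
    g = torch.autograd.grad(loss_fn(w), w)[0]
    # Ascend to the (first-order) worst-case point within an L2 ball of radius rho,
    eps = rho * g / (g.norm() + 1e-12)
    # then take the descent step using the gradient at the perturbed weights.
    loss_fn(w + eps).backward()
    opt.step()

print(w.detach(), loss_fn(w).item())
```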

Improving generalization with approximate factored value functions

S Sodhani, S Levine, A Zhang - Transactions on Machine Learning …, 2022 - openreview.net
Reinforcement learning in general unstructured MDPs presents a challenging learning
problem. However, certain MDP structures, such as factorization, are known to simplify the …
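The snippet notes that factored MDP structure simplifies learning. A sketch of the structural idea in its simplest form: approximate the value function as a sum of per-factor heads over a partitioned state. The paper studies approximate factored value functions; this illustrates only the exact additive special case, with a hypothetical state partition.

```python
import torch

class FactoredValue(torch.nn.Module):
    """Approximate V(s) as a sum of per-factor values V_i(s_i),
    exploiting a factored state s = (s_1, ..., s_k)."""
    def __init__(self, factor_dims):
        super().__init__()
        self.factor_dims = factor_dims
        self.heads = torch.nn.ModuleList(
            torch.nn.Sequential(torch.nn.Linear(d, 32), torch.nn.ReLU(),
                                torch.nn.Linear(32, 1))
            for d in factor_dims
        )

    def forward(self, state):
        values, start = [], 0
        for d, head in zip(self.factor_dims, self.heads):
            values.append(head(state[:, start:start + d]))  # value of one factor
            start += d
        return torch.stack(values).sum(dim=0)               # (batch, 1)

v = FactoredValue(factor_dims=[3, 4, 2])   # hypothetical split of a 9-dim state
print(v(torch.randn(8, 9)).shape)          # torch.Size([8, 1])
```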

Backward compatibility during data updates by weight interpolation

R Schumann, E Mansimov, YA Lai, N Pappas… - arXiv preprint arXiv …, 2023 - arxiv.org
Backward compatibility of model predictions is a desired property when updating a machine-learning-driven application. It allows the underlying model to be improved seamlessly without …
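The title names the mechanism directly: interpolate between the old and updated model weights. A minimal sketch of that operation follows; how the paper selects the interpolation coefficient is not shown, and alpha=0.7 is an arbitrary placeholder.

```python
import torch

def interpolate_weights(old_state, new_state, alpha):
    """theta = (1 - alpha) * theta_old + alpha * theta_new, per parameter."""
    return {k: (1 - alpha) * old_state[k] + alpha * new_state[k]
            for k in old_state}

old_model = torch.nn.Linear(10, 2)
new_model = torch.nn.Linear(10, 2)   # same architecture, retrained on updated data
merged = torch.nn.Linear(10, 2)
merged.load_state_dict(interpolate_weights(old_model.state_dict(),
                                           new_model.state_dict(), alpha=0.7))
```

Intermediate values of alpha trade accuracy gains on the new data against regressions on examples the old model already handled correctly, which is the backward-compatibility tension the abstract describes.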

Primal-dual continual learning: Stability and plasticity through Lagrange multipliers

J Elenter, N NaderiAlizadeh, T Javidi, A Ribeiro - 2023 - openreview.net
Continual learning is inherently a constrained learning problem. The goal is to learn a
predictor under a no-forgetting requirement. Although several prior studies formulate it as …
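The abstract frames continual learning as constrained learning: minimize the new-task loss subject to a bound on old-task loss. A toy primal-dual sketch with hypothetical quadratic losses: gradient descent on the Lagrangian in the weights, gradient ascent in the multiplier.

```python
import torch

# Hypothetical stand-ins for the old-task and new-task losses.
def loss_old(w): return ((w - torch.tensor([1.0, 0.0])) ** 2).sum()
def loss_new(w): return ((w - torch.tensor([0.0, 1.0])) ** 2).sum()

w = torch.zeros(2, requires_grad=True)
lam = torch.tensor(0.0)          # Lagrange multiplier for the no-forgetting constraint
eps, lr_w, lr_lam = 0.1, 0.05, 0.1

for _ in range(500):
    # Primal step: descend the Lagrangian L = loss_new + lam * (loss_old - eps).
    lagrangian = loss_new(w) + lam * (loss_old(w) - eps)
    g = torch.autograd.grad(lagrangian, w)[0]
    with torch.no_grad():
        w -= lr_w * g
        # Dual step: ascend in lam, keeping it non-negative.
        lam = torch.clamp(lam + lr_lam * (loss_old(w) - eps), min=0.0)

print(w.detach(), loss_old(w).item())  # loss_old ends up approximately at eps
```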

Generate to discriminate: Expert routing for continual learning

Y Byun, SV Mehta, S Garg, E Strubell, M Oberst… - arXiv preprint arXiv …, 2024 - arxiv.org
In many real-world settings, regulations and economic incentives permit the sharing of
models but not data across institutional boundaries. In such scenarios, practitioners might …
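The title suggests routing each input to a per-domain expert using generative models, so that only models, never raw data, cross institutional boundaries. A toy sketch of that routing idea, with a diagonal Gaussian over features standing in for a learned generator; the paper's actual generators and experts are not specified here.

```python
import torch

class DomainGaussian:
    """A simple per-domain feature density (stand-in for a learned generator)."""
    def __init__(self, feats):
        self.mean = feats.mean(0)
        self.var = feats.var(0) + 1e-6

    def log_prob(self, x):
        dist = torch.distributions.Normal(self.mean, self.var.sqrt())
        return dist.log_prob(x).sum(-1)   # (batch,) log-likelihoods

def route(x, generators, experts):
    # Send each sample to the expert whose generator assigns it highest likelihood.
    scores = torch.stack([g.log_prob(x) for g in generators], dim=1)
    choice = scores.argmax(dim=1)
    return torch.stack([experts[int(c)](x[i]) for i, c in enumerate(choice)])

domain_a = DomainGaussian(torch.randn(100, 8) + 2.0)
domain_b = DomainGaussian(torch.randn(100, 8) - 2.0)
experts = [torch.nn.Linear(8, 3), torch.nn.Linear(8, 3)]
x = torch.randn(4, 8) + 2.0       # should mostly route to domain A's expert
print(route(x, [domain_a, domain_b], experts).shape)   # torch.Size([4, 3])
```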

Sample Weight Estimation Using Meta-Updates for Online Continual Learning

H Hemati, D Borth - arXiv preprint arXiv:2401.15973, 2024 - arxiv.org
The loss function plays an important role in optimizing the performance of a learning system.
A crucial aspect of the loss function is the assignment of sample weights within a mini-batch …
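The snippet concerns assigning per-sample weights within a mini-batch via meta-updates. A compact sketch of the generic bilevel recipe (in the style of learning-to-reweight): take a virtual weighted gradient step, evaluate a meta loss on a held-out batch, and differentiate it with respect to the per-sample weights. The paper's exact meta objective may differ; the linear model and batch sizes are hypothetical.

```python
import torch
import torch.nn.functional as F

def per_sample_loss(W, x, y):
    """Per-sample cross-entropy of a linear model, written functionally
    so we can differentiate through a virtual update of W."""
    return F.cross_entropy(x @ W, y, reduction="none")

W = torch.randn(8, 3, requires_grad=True)
x, y = torch.randn(16, 8), torch.randint(0, 3, (16,))
x_val, y_val = torch.randn(16, 8), torch.randint(0, 3, (16,))  # held-out meta batch
lr = 0.1

# Inner step: weight each sample by eps and take one virtual SGD step.
eps = torch.zeros(16, requires_grad=True)
inner = (eps * per_sample_loss(W, x, y)).sum()
gW, = torch.autograd.grad(inner, W, create_graph=True)
W_virtual = W - lr * gW

# Outer step: samples whose upweighting lowers the meta loss get positive weight.
meta = per_sample_loss(W_virtual, x_val, y_val).mean()
g_eps, = torch.autograd.grad(meta, eps)
w_sample = torch.clamp(-g_eps, min=0.0)
w_sample = w_sample / (w_sample.sum() + 1e-12)
print(w_sample)   # mini-batch weights, higher for samples that help the meta batch
```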

Model Successor Functions

Y Chang, Y Bisk - arXiv preprint arXiv:2502.00197, 2025 - arxiv.org
The notion of generalization has moved away from the classical one defined in statistical
learning theory towards an emphasis on out-of-domain generalization (OODG). Recently …