A comprehensive survey of continual learning: theory, method and application

L Wang, X Zhang, H Su, J Zhu - IEEE Transactions on Pattern …, 2024 - ieeexplore.ieee.org
To cope with real-world dynamics, an intelligent system needs to incrementally acquire,
update, accumulate, and exploit knowledge throughout its lifetime. This ability, known as …

A comprehensive survey of forgetting in deep learning beyond continual learning

Z Wang, E Yang, L Shen… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Forgetting refers to the loss or deterioration of previously acquired knowledge. While
existing surveys on forgetting have primarily focused on continual learning, forgetting is a …

Plex: Towards reliability using pretrained large model extensions

D Tran, J Liu, MW Dusenberry, D Phan… - arXiv preprint arXiv …, 2022 - arxiv.org
A recent trend in artificial intelligence is the use of pretrained models for language and
vision tasks, which have achieved extraordinary performance but also puzzling failures …

Make continual learning stronger via C-flat

A Bian, W Li, H Yuan, M Wang, Z Zhao… - Advances in …, 2025 - proceedings.neurips.cc
How to balance the learning 'sensitivity-stability' upon new task training and memory
preservation is critical in CL to resolve catastrophic forgetting. Improving model generalization …
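
Background for the flatness angle in this entry: a generic sharpness-aware (SAM-style) update, which seeks flat minima by evaluating the gradient at an adversarially perturbed weight point. This is a standard sketch for orientation, not C-Flat itself; the model, data, and rho below are illustrative.

    import torch
    import torch.nn as nn

    model = nn.Linear(10, 2)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x, y = torch.randn(16, 10), torch.randint(0, 2, (16,))
    rho = 0.05  # neighborhood radius for the worst-case perturbation

    def loss_fn():
        return nn.functional.cross_entropy(model(x), y)

    loss_fn().backward()                          # gradient at the current weights
    grad_norm = torch.sqrt(sum((p.grad ** 2).sum() for p in model.parameters()))
    eps = [rho * p.grad / (grad_norm + 1e-12) for p in model.parameters()]
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.add_(e)                             # step to the worst-case neighbor
    opt.zero_grad()
    loss_fn().backward()                          # gradient at the perturbed point
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.sub_(e)                             # restore the original weights
    opt.step()                                    # descend with the sharpness-aware gradient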

Pre-train your loss: Easy Bayesian transfer learning with informative priors

R Shwartz-Ziv, M Goldblum, H Souri… - Advances in …, 2022 - proceedings.neurips.cc
Deep learning is increasingly moving towards a transfer learning paradigm whereby large
foundation models are fine-tuned on downstream tasks, starting from an initialization …

Benchmarking Bayesian deep learning on diabetic retinopathy detection tasks

N Band, TGJ Rudner, Q Feng, A Filos, Z Nado… - arXiv preprint arXiv …, 2022 - arxiv.org
Bayesian deep learning seeks to equip deep neural networks with the ability to precisely
quantify their predictive uncertainty, and has promised to make deep learning more reliable …
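
For orientation on what "quantify their predictive uncertainty" typically means in practice: average the predictive distributions of several sampled models and report the entropy of the average. A toy sketch; the sampling mechanism (an ensemble, MC dropout, etc.) and the shapes are stand-ins of ours:

    import numpy as np

    # (samples, batch, classes): predictive distributions from sampled models
    probs = np.random.dirichlet(np.ones(5), size=(10, 32))
    p_mean = probs.mean(axis=0)                           # Bayesian model average
    entropy = -(p_mean * np.log(p_mean + 1e-12)).sum(-1)  # total predictive uncertainty
    print(entropy.shape)                                  # one uncertainty score per input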

Function-space regularization in neural networks: A probabilistic perspective

TGJ Rudner, S Kapoor, S Qiu… - … on Machine Learning, 2023 - proceedings.mlr.press
Parameter-space regularization in neural network optimization is a fundamental tool for
improving generalization. However, standard parameter-space regularization methods …
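
To make the parameter-space vs. function-space distinction concrete: a minimal sketch contrasting an L2 penalty on the weights with a penalty on the network's outputs at a set of context points. The architecture, data, and the zero-mean output penalty are illustrative assumptions, not the paper's objective:

    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
    x, y = torch.randn(64, 1), torch.randn(64, 1)     # toy training data
    x_ctx = torch.linspace(-3, 3, 50).unsqueeze(-1)   # context points for the function-space term

    def loss_parameter_space(lam=1e-2):
        fit = nn.functional.mse_loss(net(x), y)
        l2 = sum((p ** 2).sum() for p in net.parameters())  # penalize the weights directly
        return fit + lam * l2

    def loss_function_space(lam=1e-2):
        fit = nn.functional.mse_loss(net(x), y)
        fs = (net(x_ctx) ** 2).mean()   # penalize the function values at context points
        return fit + lam * fs

    print(loss_parameter_space().item(), loss_function_space().item())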

Tractable function-space variational inference in Bayesian neural networks

TGJ Rudner, Z Chen, YW Teh… - Advances in Neural …, 2022 - proceedings.neurips.cc
Reliable predictive uncertainty estimation plays an important role in enabling the
deployment of neural networks to safety-critical settings. A popular approach for estimating …

Should we learn most likely functions or parameters?

S Qiu, TGJ Rudner, S Kapoor… - Advances in Neural …, 2024 - proceedings.neurips.cc
Standard regularized training procedures correspond to maximizing a posterior distribution
over parameters, known as maximum a posteriori (MAP) estimation. However, model …
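
The correspondence the abstract invokes can be seen in closed form for a linear-Gaussian model: minimizing squared error plus an L2 weight penalty is MAP estimation under a Gaussian prior p(w) = N(0, (1/lam) I), i.e. ridge regression. A toy check (all values are ours):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    w_true = np.array([1.0, -2.0, 0.5])
    y = X @ w_true + 0.1 * rng.normal(size=100)

    lam = 0.1  # precision of the Gaussian prior on w; lam -> 0 recovers maximum likelihood
    # MAP solution for the linear-Gaussian model: ridge regression in closed form
    w_map = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
    print(w_map)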

A study of Bayesian neural network surrogates for Bayesian optimization

YL Li, TGJ Rudner, AG Wilson - arXiv preprint arXiv:2305.20028, 2023 - arxiv.org
Bayesian optimization is a highly efficient approach to optimizing objective functions which
are expensive to query. These objectives are typically represented by Gaussian process …
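
A minimal Bayesian-optimization loop of the kind described: a Gaussian-process surrogate (RBF kernel) fit to the queried points, with an upper-confidence-bound acquisition over a 1-D grid. The kernel hyperparameters, the acquisition rule, and the toy objective are illustrative choices of ours:

    import numpy as np

    def f(x):                        # expensive black-box objective (toy stand-in)
        return -np.sin(3 * x) - x ** 2 + 0.7 * x

    def rbf(a, b, ls=0.3):           # RBF kernel with unit prior variance
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

    grid = np.linspace(-2, 2, 200)
    X = np.array([-1.0, 1.0])        # initial design
    y = f(X)

    for _ in range(10):
        K = rbf(X, X) + 1e-6 * np.eye(len(X))
        Ks = rbf(grid, X)
        mu = Ks @ np.linalg.solve(K, y)                               # GP posterior mean
        var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
        ucb = mu + 2.0 * np.sqrt(np.clip(var, 0, None))               # acquisition
        x_next = grid[np.argmax(ucb)]                                 # next query point
        X, y = np.append(X, x_next), np.append(y, f(x_next))

    print("best x:", X[np.argmax(y)], "best f:", y.max())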