A comprehensive survey of forgetting in deep learning beyond continual learning

Z Wang, E Yang, L Shen… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Forgetting refers to the loss or deterioration of previously acquired knowledge. While
existing surveys on forgetting have primarily focused on continual learning, forgetting is a …

OneNet: Enhancing time series forecasting models under concept drift by online ensembling

Q Wen, W Chen, L Sun, Z Zhang… - Advances in …, 2023 - proceedings.neurips.cc
Online updating of time series forecasting models aims to address the concept drifting
problem by efficiently updating forecasting models based on streaming data. Many …

Transformers in biosignal analysis: A review

A Anwar, Y Khalifa, JL Coyle, E Sejdic - Information Fusion, 2024 - Elsevier
Transformer architectures have become increasingly popular in healthcare applications.
Through outstanding performance in natural language processing and superior capability to …

AdaNPC: Exploring non-parametric classifier for test-time adaptation

Y Zhang, X Wang, K Jin, K Yuan… - International …, 2023 - proceedings.mlr.press
Many recent machine learning tasks focus on developing models that can generalize to unseen
distributions. Domain generalization (DG) has become one of the key topics in various fields …

Large-scale multi-center CT and MRI segmentation of pancreas with deep learning

Z Zhang, E Keles, G Durak, Y Taktak, O Susladkar… - Medical image …, 2025 - Elsevier
Automated volumetric segmentation of the pancreas on cross-sectional imaging is needed
for diagnosis and follow-up of pancreatic diseases. While CT-based pancreatic …

ProxyMix: Proxy-based mixup training with label refinery for source-free domain adaptation

Y Ding, L Sheng, J Liang, A Zheng, R He - Neural Networks, 2023 - Elsevier
Due to privacy concerns and data transmission issues, Source-free Unsupervised Domain
Adaptation (SFDA) has gained popularity. It exploits pre-trained source models, rather than …

ViDA: Homeostatic visual domain adapter for continual test time adaptation

J Liu, S Yang, P Jia, R Zhang, M Lu, Y Guo… - arXiv preprint arXiv …, 2023 - arxiv.org
Since real-world machine systems run in non-stationary environments, the Continual
Test-Time Adaptation (CTTA) task is proposed to adapt the pre-trained model to continually …

On the test-time zero-shot generalization of vision-language models: Do we really need prompt learning?

M Zanella, I Ben Ayed - … of the IEEE/CVF Conference on …, 2024 - openaccess.thecvf.com
The development of large vision-language models, notably CLIP, has catalyzed research into
effective adaptation techniques, with a particular focus on soft prompt tuning. Conjointly, test …

MedAdapter: Efficient test-time adaptation of large language models towards medical reasoning

W Shi, R Xu, Y Zhuang, Y Yu, H Sun, H Wu… - arXiv preprint arXiv …, 2024 - arxiv.org
Despite their improved capabilities in generation and reasoning, adapting large language
models (LLMs) to the biomedical domain remains challenging due to their immense size …

AdaMerging: Adaptive model merging for multi-task learning

E Yang, Z Wang, L Shen, S Liu, G Guo, X Wang… - arXiv preprint arXiv …, 2023 - arxiv.org
Multi-task learning (MTL) aims to empower a model to tackle multiple tasks simultaneously.
A recent development known as task arithmetic has revealed that several models, each fine …