A comprehensive study of class incremental learning algorithms for visual tasks
The ability of artificial agents to increment their capabilities when confronted with new data is
an open challenge in artificial intelligence. The main challenge faced in such cases is …
Adding conditional control to text-to-image diffusion models
We present ControlNet, a neural network architecture to add spatial conditioning controls to
large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large …
A survey on few-shot class-incremental learning
Large deep learning models are impressive, but they struggle when real-time data is not
available. Few-shot class-incremental learning (FSCIL) poses a significant challenge for …
Vision transformer adapter for dense predictions
This work investigates a simple yet powerful adapter for Vision Transformer (ViT). Unlike
recent visual transformers that introduce vision-specific inductive biases into their …
Parameter-efficient transfer learning for NLP
Fine-tuning large pretrained models is an effective transfer mechanism in NLP. However, in
the presence of many downstream tasks, fine-tuning is parameter inefficient: an entire new …
A continual learning survey: Defying forgetting in classification tasks
Artificial neural networks thrive in solving the classification problem for a particular rigid task,
acquiring knowledge through generalized learning behaviour from a distinct training phase …
Class-incremental learning: survey and performance evaluation on image classification
For future learning systems, incremental learning is desirable because it allows for: efficient
resource usage by eliminating the need to retrain from scratch at the arrival of new data; …
End-to-end multi-task learning with attention
We propose a novel multi-task learning architecture, which allows learning of task-specific
feature-level attention. Our design, the Multi-Task Attention Network (MTAN), consists of a …
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different
types of problems and data. In this paper, we look in particular at the task of learning a single …
Piggyback: Adapting a single network to multiple tasks by learning to mask weights
This work presents a method for adapting a single, fixed deep neural network to multiple
tasks without affecting performance on already learned tasks. By building upon ideas from …