A survey on deep semi-supervised learning
Deep semi-supervised learning is a fast-growing field with a range of practical applications.
This paper provides a comprehensive survey on both fundamentals and recent advances in …
A comprehensive survey on test-time adaptation under distribution shifts
Machine learning methods strive to acquire a robust model during the training
process that can effectively generalize to test samples, even in the presence of distribution …
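One representative family covered by such surveys is test-time entropy minimization in the style of Tent (Wang et al., ICLR 2021). The sketch below is a minimal illustration of that idea, not code from the survey; it assumes a PyTorch classifier with affine BatchNorm layers, and the learning rate is illustrative.

```python
import torch
import torch.nn as nn

def collect_bn_params(model: nn.Module):
    """Tent-style setup: freeze everything except BatchNorm affine parameters,
    and use current-batch statistics at test time (model.train())."""
    model.train()
    model.requires_grad_(False)
    params = []
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d) and m.affine:
            m.requires_grad_(True)
            params += [m.weight, m.bias]
    return params

def adapt_step(model, x, optimizer):
    """One adaptation step: minimize prediction entropy on an unlabeled test batch."""
    logits = model(x)
    probs = logits.softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return logits.detach()

# Usage sketch: optimizer = torch.optim.SGD(collect_bn_params(model), lr=1e-3)
# then call adapt_step(model, batch, optimizer) for each incoming test batch.
```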
Self-training with Noisy Student improves ImageNet classification
We present a simple self-training method that achieves 88.4% top-1 accuracy on ImageNet,
which is 2.0% better than the state-of-the-art model that requires 3.5B weakly labeled …
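The core loop behind this kind of self-training is compact: a teacher labels the unlabeled data, and a student is trained on labeled plus pseudo-labeled data with noise injected. A minimal sketch (datasets, noise, and model sizes are all assumptions, not the paper's configuration):

```python
import torch

@torch.no_grad()
def pseudo_label(teacher, unlabeled_batches, device="cpu"):
    """Teacher generates hard pseudo-labels for the unlabeled set
    (Noisy Student also works with soft labels)."""
    teacher.eval()
    pairs = []
    for x in unlabeled_batches:
        x = x.to(device)
        y_hat = teacher(x).argmax(dim=1)
        pairs.append((x.cpu(), y_hat.cpu()))
    return pairs

def train_student(student, labeled_pairs, pseudo_pairs, epochs, optimizer):
    """Student trains on labeled + pseudo-labeled (x, y) pairs; in the paper the
    student is additionally noised via RandAugment, dropout, and stochastic depth."""
    loss_fn = torch.nn.CrossEntropyLoss()
    student.train()
    for _ in range(epochs):
        for x, y in list(labeled_pairs) + list(pseudo_pairs):
            optimizer.zero_grad()
            loss = loss_fn(student(x), y)
            loss.backward()
            optimizer.step()
```

In the paper's iterative scheme, the trained student then becomes the next teacher and the loop repeats, ideally with an equal-or-larger student each round.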
Extract free dense labels from CLIP
Contrastive Language-Image Pre-training (CLIP) has made a remarkable
breakthrough in open-vocabulary zero-shot image recognition. Many recent studies …
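Approaches in this line reuse CLIP's image features densely: each spatial feature is scored against class-name text embeddings, yielding per-pixel labels with no dense annotation. A hedged sketch of that scoring step only, where `dense_features` and `text_embeddings` are assumed to come from a (suitably modified) CLIP image encoder and the CLIP text encoder:

```python
import torch
import torch.nn.functional as F

def dense_clip_labels(dense_features: torch.Tensor,
                      text_embeddings: torch.Tensor) -> torch.Tensor:
    """dense_features: (B, C, H, W) per-pixel image embeddings.
    text_embeddings: (K, C) embeddings of K class-name prompts.
    Returns a (B, H, W) map of predicted class indices."""
    feats = F.normalize(dense_features, dim=1)   # cosine-similarity space
    texts = F.normalize(text_embeddings, dim=1)
    logits = torch.einsum("bchw,kc->bkhw", feats, texts)
    return logits.argmax(dim=1)
```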
In defense of pseudo-labeling: An uncertainty-aware pseudo-label selection framework for semi-supervised learning
Recent research in semi-supervised learning (SSL) is mostly dominated by
consistency-regularization-based methods, which achieve strong performance. However, they heavily …
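The selection rule such uncertainty-aware frameworks describe can be written compactly: keep a pseudo-label only when the model is confident and, under stochastic forward passes (e.g. MC dropout), low-variance. A minimal sketch; the thresholds `tau_conf` and `tau_unc` are purely illustrative:

```python
import torch

@torch.no_grad()
def select_pseudo_labels(model, x, n_passes=10, tau_conf=0.9, tau_unc=0.05):
    """Uncertainty-aware pseudo-label selection (sketch).
    Runs n_passes stochastic forward passes with dropout active and keeps
    samples whose mean confidence is high and predictive std is low."""
    model.train()  # keeps dropout active; assumes BatchNorm drift is acceptable here
    probs = torch.stack([model(x).softmax(dim=1) for _ in range(n_passes)])
    mean_p = probs.mean(dim=0)                      # (B, K)
    std_p = probs.std(dim=0)                        # (B, K)
    conf, labels = mean_p.max(dim=1)
    unc = std_p.gather(1, labels.unsqueeze(1)).squeeze(1)
    keep = (conf > tau_conf) & (unc < tau_unc)
    return x[keep], labels[keep]
```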
Pseudo-labeling and confirmation bias in deep semi-supervised learning
Semi-supervised learning, i.e., jointly learning from labeled and unlabeled samples, is an
active research topic due to its key role in relaxing human supervision. In the context of …
Rethinking pre-training and self-training
Pre-training is a dominant paradigm in computer vision. For example, supervised ImageNet
pre-training is commonly used to initialize the backbones of object detection and …
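In code, the pre-training paradigm the abstract refers to is just backbone weight initialization. A minimal torchvision sketch; the attached head is a toy stand-in for a detection or segmentation head, not the paper's setup:

```python
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

# Initialize a backbone from supervised ImageNet pre-training...
backbone = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = nn.Identity()  # drop the ImageNet classifier head

# ...then attach a task-specific head and fine-tune on the target task.
model = nn.Sequential(backbone, nn.Linear(2048, 21))
```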
Hierarchical multi-scale attention for semantic segmentation
Multi-scale inference is commonly used to improve the results of semantic segmentation.
Multiple image scales are passed through a network and then the results are combined …
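Plain multi-scale inference, the baseline this paper improves on with learned attention weights, fits in a few lines: run the network at several scales, resize the logits back, and average. The attention variant replaces the uniform average with predicted per-pixel weights. A sketch, with the scale set illustrative:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def multiscale_inference(model, image, scales=(0.5, 1.0, 2.0)):
    """Average semantic-segmentation logits over several input scales.
    image: (B, 3, H, W); model returns (B, K, h, w) logits."""
    B, _, H, W = image.shape
    fused = 0
    for s in scales:
        x = F.interpolate(image, scale_factor=s, mode="bilinear",
                          align_corners=False)
        logits = model(x)
        fused = fused + F.interpolate(logits, size=(H, W), mode="bilinear",
                                      align_corners=False)
    return fused / len(scales)
```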
AbdomenCT-1K: Is abdominal organ segmentation a solved problem?
With the unprecedented developments in deep learning, automatic segmentation of main
abdominal organs seems to be a solved problem as state-of-the-art (SOTA) methods have …
TabTransformer: Tabular data modeling using contextual embeddings
We propose TabTransformer, a novel deep tabular data modeling architecture for
supervised and semi-supervised learning. The TabTransformer is built upon self-attention …
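The described architecture is compact enough to sketch: each categorical column gets an embedding, a Transformer encoder contextualizes the column embeddings, and the result is concatenated with the (normalized) continuous features before an MLP head. A simplified sketch; dimensions and layer counts are illustrative, not the paper's configuration:

```python
import torch
import torch.nn as nn

class TabTransformerSketch(nn.Module):
    def __init__(self, cardinalities, n_continuous, d=32, n_layers=3, n_classes=2):
        super().__init__()
        # One embedding table per categorical column.
        self.embeds = nn.ModuleList(nn.Embedding(c, d) for c in cardinalities)
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.norm = nn.LayerNorm(n_continuous)
        self.head = nn.Sequential(
            nn.Linear(d * len(cardinalities) + n_continuous, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x_cat, x_cont):
        # x_cat: (B, n_categorical) integer codes; x_cont: (B, n_continuous)
        tokens = torch.stack(
            [e(x_cat[:, i]) for i, e in enumerate(self.embeds)], dim=1)
        ctx = self.encoder(tokens)   # contextual column embeddings
        flat = ctx.flatten(1)        # (B, n_categorical * d)
        return self.head(torch.cat([flat, self.norm(x_cont)], dim=1))
```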