A survey on deep semi-supervised learning

X Yang, Z Song, I King, Z Xu - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Deep semi-supervised learning is a fast-growing field with a range of practical applications.
This paper provides a comprehensive survey on both fundamentals and recent advances in …

A comprehensive survey on test-time adaptation under distribution shifts

J Liang, R He, T Tan - International Journal of Computer Vision, 2024 - Springer
Machine learning methods strive to acquire a robust model during the training
process that can effectively generalize to test samples, even in the presence of distribution …

Self-training with Noisy Student improves ImageNet classification

Q **e, MT Luong, E Hovy… - Proceedings of the IEEE …, 2020 - openaccess.thecvf.com
We present a simple self-training method that achieves 88.4% top-1 accuracy on ImageNet,
which is 2.0% better than the state-of-the-art model that requires 3.5B weakly labeled …
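At its core this is the classic self-training loop: a teacher pseudo-labels the unlabeled set, a student retrains on the union, and the student becomes the next teacher. A minimal sketch of that loop, assuming scikit-learn, synthetic data, and an illustrative confidence threshold (the paper's actual setup uses EfficientNet with heavy input and model noise, not a linear model):

```python
# Minimal sketch of the generic self-training loop behind Noisy Student:
# teacher pseudo-labels unlabeled data, a student retrains on the union.
# The model, data, and 0.9 threshold are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(100, 20))
y_labeled = (X_labeled[:, 0] > 0).astype(int)
X_unlabeled = rng.normal(size=(1000, 20))

teacher = LogisticRegression().fit(X_labeled, y_labeled)

for _ in range(3):  # a few teacher -> student iterations
    # Teacher predicts on unlabeled data; keep only confident pseudo-labels.
    probs = teacher.predict_proba(X_unlabeled)
    keep = probs.max(axis=1) > 0.9
    X_train = np.vstack([X_labeled, X_unlabeled[keep]])
    y_train = np.concatenate([y_labeled, probs[keep].argmax(axis=1)])
    # Student retrains on labeled + pseudo-labeled data; Noisy Student would
    # also inject noise (augmentation, dropout, stochastic depth) here.
    student = LogisticRegression().fit(X_train, y_train)
    teacher = student  # the student becomes the next teacher
```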

Extract free dense labels from CLIP

C Zhou, CC Loy, B Dai - European Conference on Computer Vision, 2022 - Springer
Contrastive Language-Image Pre-training (CLIP) has made a remarkable
breakthrough in open-vocabulary zero-shot image recognition. Many recent studies …
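For context, the zero-shot recognition the snippet refers to works by embedding class-name prompts and images into a shared space and classifying by cosine similarity; this paper extends that signal from image-level labels to dense, per-pixel ones. A hedged sketch of the image-level case using the openai/CLIP package (https://github.com/openai/CLIP), with illustrative prompts and a hypothetical image path:

```python
# Sketch of CLIP-style zero-shot classification. Prompts, class names, and
# "example.jpg" are illustrative assumptions, not from the paper.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

classes = ["a photo of a dog", "a photo of a cat", "a photo of a car"]
text = clip.tokenize(classes).to(device)
image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    # Embed both modalities into the shared space and L2-normalize.
    img_feat = model.encode_image(image)
    txt_feat = model.encode_text(text)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    # Cosine similarity over prompts acts as the zero-shot classifier.
    probs = (100.0 * img_feat @ txt_feat.T).softmax(dim=-1)
print(classes[probs.argmax().item()])
```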

In defense of pseudo-labeling: An uncertainty-aware pseudo-label selection framework for semi-supervised learning

MN Rizve, K Duarte, YS Rawat, M Shah - arXiv preprint arXiv:2101.06329, 2021 - arxiv.org
The recent research in semi-supervised learning (SSL) is mostly dominated by consistency
regularization based methods which achieve strong performance. However, they heavily …
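The selection step such frameworks build on can be sketched compactly: keep a pseudo-label only when the averaged prediction is confident and its variance across stochastic forward passes (e.g., MC dropout) is low. The thresholds and the random probabilities below are illustrative assumptions, not the paper's exact criterion:

```python
# Hedged sketch of uncertainty-aware pseudo-label selection: a sample is kept
# only if it is both confident and stable across stochastic forward passes.
import numpy as np

def select_pseudo_labels(prob_samples, conf_thresh=0.9, unc_thresh=0.05):
    """prob_samples: (T, N, C) class probabilities from T stochastic passes."""
    mean_probs = prob_samples.mean(axis=0)               # (N, C)
    labels = mean_probs.argmax(axis=1)                   # hard pseudo-labels
    confidence = mean_probs.max(axis=1)                  # mean confidence
    # Uncertainty: std-dev of the winning-class probability across passes.
    winning = prob_samples[:, np.arange(labels.size), labels]  # (T, N)
    uncertainty = winning.std(axis=0)
    keep = (confidence > conf_thresh) & (uncertainty < unc_thresh)
    return labels[keep], np.flatnonzero(keep)

# Toy usage with random stand-ins for MC-dropout outputs.
rng = np.random.default_rng(1)
logits = rng.normal(size=(10, 500, 5))                   # 10 passes, 500 samples
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
labels, idx = select_pseudo_labels(probs)
print(f"selected {idx.size} of 500 unlabeled samples")
```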

Pseudo-labeling and confirmation bias in deep semi-supervised learning

E Arazo, D Ortego, P Albert… - … joint conference on …, 2020 - ieeexplore.ieee.org
Semi-supervised learning, i.e., jointly learning from labeled and unlabeled samples, is an
active research topic due to its key role in relaxing human supervision. In the context of …

Rethinking pre-training and self-training

B Zoph, G Ghiasi, TY Lin, Y Cui, H Liu… - Advances in neural …, 2020 - proceedings.neurips.cc
Pre-training is a dominant paradigm in computer vision. For example, supervised ImageNet
pre-training is commonly used to initialize the backbones of object detection and …

Hierarchical multi-scale attention for semantic segmentation

A Tao, K Sapra, B Catanzaro - arXiv preprint arXiv:2005.10821, 2020 - arxiv.org
Multi-scale inference is commonly used to improve the results of semantic segmentation.
Multiple image scales are passed through a network and then the results are combined …
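The baseline being improved here is plain multi-scale averaging. A minimal sketch, assuming PyTorch, a stand-in one-layer "network", and illustrative scales; the paper's contribution is to replace the uniform average below with learned hierarchical attention weights:

```python
# Standard multi-scale inference for semantic segmentation: run the network
# at several input scales, resize the logits back, and average. Model and
# scales are illustrative assumptions.
import torch
import torch.nn.functional as F

model = torch.nn.Conv2d(3, 19, kernel_size=1)   # stand-in segmentation head
image = torch.randn(1, 3, 256, 256)             # (N, C, H, W)
scales = (0.5, 1.0, 2.0)

with torch.no_grad():
    fused = 0.0
    for s in scales:
        scaled = F.interpolate(image, scale_factor=s, mode="bilinear",
                               align_corners=False)
        logits = model(scaled)
        # Resize class logits to the original resolution before fusing.
        fused = fused + F.interpolate(logits, size=image.shape[-2:],
                                      mode="bilinear", align_corners=False)
    prediction = (fused / len(scales)).argmax(dim=1)   # (N, H, W) class map
```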

AbdomenCT-1K: Is abdominal organ segmentation a solved problem?

J Ma, Y Zhang, S Gu, C Zhu, C Ge… - … on Pattern Analysis …, 2021 - ieeexplore.ieee.org
With the unprecedented developments in deep learning, automatic segmentation of main
abdominal organs seems to be a solved problem as state-of-the-art (SOTA) methods have …

TabTransformer: Tabular data modeling using contextual embeddings

X Huang, A Khetan, M Cvitkovic, Z Karnin - arXiv preprint arXiv …, 2020 - arxiv.org
We propose TabTransformer, a novel deep tabular data modeling architecture for
supervised and semi-supervised learning. The TabTransformer is built upon self-attention …
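The architecture described can be sketched as: one embedding per categorical column, a Transformer encoder to contextualize those column embeddings, and an MLP head over the flattened output concatenated with normalized continuous features. Dimensions and layer counts below are illustrative assumptions, not the paper's exact configuration:

```python
# Hedged sketch of the TabTransformer idea in PyTorch.
import torch
import torch.nn as nn

class TabTransformerSketch(nn.Module):
    def __init__(self, cardinalities, n_continuous, d_model=32, n_classes=2):
        super().__init__()
        # One embedding table per categorical column.
        self.embeds = nn.ModuleList(
            [nn.Embedding(card, d_model) for card in cardinalities])
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.norm = nn.LayerNorm(n_continuous)
        self.head = nn.Sequential(
            nn.Linear(len(cardinalities) * d_model + n_continuous, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes))

    def forward(self, x_cat, x_cont):
        # x_cat: (N, n_categorical) integer codes; x_cont: (N, n_continuous).
        tokens = torch.stack(
            [emb(x_cat[:, i]) for i, emb in enumerate(self.embeds)], dim=1)
        ctx = self.encoder(tokens).flatten(1)   # contextualized column embeds
        return self.head(torch.cat([ctx, self.norm(x_cont)], dim=1))

model = TabTransformerSketch(cardinalities=[10, 4, 7], n_continuous=5)
logits = model(torch.randint(0, 4, (8, 3)), torch.randn(8, 5))
```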