Self-supervised learning of graph neural networks: A unified review

Y Xie, Z Xu, J Zhang, Z Wang… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Deep models trained in supervised mode have achieved remarkable success on a variety of
tasks. When labeled samples are limited, self-supervised learning (SSL) is emerging as a …

Self-supervised learning methods and applications in medical imaging analysis: A survey

S Shurrab, R Duwairi - PeerJ Computer Science, 2022 - peerj.com
The scarcity of high-quality annotated medical imaging datasets is a major problem that
hinders machine learning applications in the field of medical imaging analysis and …

ConvNeXt V2: Co-designing and scaling ConvNets with masked autoencoders

S Woo, S Debnath, R Hu, X Chen… - Proceedings of the …, 2023 - openaccess.thecvf.com
Driven by improved architectures and better representation learning frameworks, the field of
visual recognition has enjoyed rapid modernization and a performance boost in the early …

An empirical study of training self-supervised vision transformers

X Chen, S Xie, K He - Proceedings of the IEEE/CVF …, 2021 - openaccess.thecvf.com
This paper does not describe a novel method. Instead, it studies a straightforward,
incremental, yet must-know baseline given the recent progress in computer vision: self …

iBOT: Image BERT pre-training with online tokenizer

J Zhou, C Wei, H Wang, W Shen, C Xie, A Yuille… - arXiv preprint arXiv …, 2021 - arxiv.org
The success of language Transformers is primarily attributed to the pretext task of masked
language modeling (MLM), where texts are first tokenized into semantically meaningful …
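
For readers unfamiliar with the pretext task this abstract references: masked prediction hides a random subset of tokens and trains the model to recover them. A minimal NumPy sketch follows; the token ids, mask ratio, and MASK_ID are illustrative placeholders, not iBOT's online tokenizer.

    import numpy as np

    rng = np.random.default_rng(0)

    tokens = np.array([17, 42, 5, 99, 23, 8])  # toy token ids from a placeholder vocabulary
    MASK_ID = 0                                 # reserved id standing in for the [MASK] symbol
    mask_ratio = 0.4                            # fraction of positions to hide

    # Randomly pick positions and replace them with [MASK].
    n_mask = max(1, int(mask_ratio * len(tokens)))
    masked_pos = rng.choice(len(tokens), size=n_mask, replace=False)
    corrupted = tokens.copy()
    corrupted[masked_pos] = MASK_ID

    # The pretext task: given `corrupted`, predict `targets` at the masked
    # positions, typically with a cross-entropy loss over the vocabulary.
    targets = tokens[masked_pos]
    print(corrupted, targets)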

Prompt, generate, then cache: Cascade of foundation models makes strong few-shot learners

R Zhang, X Hu, B Li, S Huang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Visual recognition in low-data regimes requires deep neural networks to learn generalized
representations from limited training samples. Recently, CLIP-based methods have shown …
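
As background for the "CLIP-based methods" mentioned here: CLIP-style zero-shot recognition scores an image embedding against embeddings of class-name prompts by cosine similarity. The sketch below assumes pretrained encoders exist; the random vectors stand in for their outputs, and this is not the paper's prompt-generate-cache cascade.

    import numpy as np

    def cosine_scores(img_emb, txt_embs):
        """Cosine similarity of one image embedding against each text embedding."""
        img = img_emb / np.linalg.norm(img_emb)
        txt = txt_embs / np.linalg.norm(txt_embs, axis=1, keepdims=True)
        return txt @ img

    # Stand-ins for a pretrained CLIP's encoder outputs (hypothetical).
    prompts = ["a photo of a cat", "a photo of a dog"]
    img_emb = np.random.randn(512)
    txt_embs = np.random.randn(len(prompts), 512)

    scores = cosine_scores(img_emb, txt_embs)
    pred = prompts[int(np.argmax(scores))]  # class whose prompt best matches the image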

Towards out-of-distribution generalization: A survey

J Liu, Z Shen, Y He, X Zhang, R Xu, H Yu… - arXiv preprint arXiv …, 2021 - arxiv.org
Traditional machine learning paradigms are based on the assumption that both training and
test data follow the same statistical pattern, which is mathematically referred to as …
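
The "same statistical pattern" assumption is usually formalized as the i.i.d. hypothesis; in standard notation (a textbook formulation, not quoted from this survey):

    P_train(X, Y) = P_test(X, Y)

Out-of-distribution generalization is the setting where this equality no longer holds.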

Diffusion autoencoders: Toward a meaningful and decodable representation

K Preechakul, N Chatthee… - Proceedings of the …, 2022 - openaccess.thecvf.com
Diffusion probabilistic models (DPMs) have achieved remarkable quality in image
generation that rivals GANs'. But unlike GANs, DPMs use a set of latent variables that lack …
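
The latent variables mentioned here are the noise maps of the DDPM forward process. A minimal NumPy sketch of that standard forward process follows (the linear schedule is the common DDPM default; this is not the paper's semantic-latent encoder):

    import numpy as np

    rng = np.random.default_rng(0)

    T = 1000
    betas = np.linspace(1e-4, 0.02, T)   # standard linear noise schedule
    alpha_bar = np.cumprod(1.0 - betas)  # cumulative signal retention per step

    def q_sample(x0, t):
        """Forward diffusion: x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * eps."""
        eps = rng.standard_normal(x0.shape)  # this Gaussian noise is the DPM 'latent'
        return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

    x0 = rng.standard_normal((8, 8))  # toy 'image'
    x_T = q_sample(x0, T - 1)         # near-Gaussian latent at the final step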

Contrastive learning of medical visual representations from paired images and text

Y Zhang, H Jiang, Y Miura… - Machine Learning …, 2022 - proceedings.mlr.press
Learning visual representations of medical images (e.g., X-rays) is core to medical image
understanding but its progress has been held back by the scarcity of human annotations …
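
The paired image-text objective in this line of work is typically a bidirectional InfoNCE loss: matched (image, text) pairs in a batch attract, mismatched pairs repel. A minimal NumPy sketch under that assumption (embedding size and temperature are illustrative):

    import numpy as np

    def info_nce(img, txt, tau=0.1):
        """Bidirectional contrastive loss; row i of `img` is paired with row i of `txt`."""
        img = img / np.linalg.norm(img, axis=1, keepdims=True)
        txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
        logits = img @ txt.T / tau  # pairwise cosine similarities scaled by temperature
        idx = np.arange(len(img))   # matched pairs sit on the diagonal

        def xent(l):
            # cross-entropy of each row against its diagonal target
            l = l - l.max(axis=1, keepdims=True)
            logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
            return -logp[idx, idx].mean()

        return 0.5 * (xent(logits) + xent(logits.T))  # image-to-text + text-to-image

    batch_img = np.random.randn(4, 128)  # stand-in image embeddings
    batch_txt = np.random.randn(4, 128)  # stand-in text embeddings
    loss = info_nce(batch_img, batch_txt)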

Graph self-supervised learning: A survey

Y Liu, M Jin, S Pan, C Zhou, Y Zheng… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Deep learning on graphs has attracted significant interest recently. However, most of the
works have focused on (semi-)supervised learning, resulting in shortcomings including …