Self-supervised learning of graph neural networks: A unified review
Deep models trained in supervised mode have achieved remarkable success on a variety of
tasks. When labeled samples are limited, self-supervised learning (SSL) is emerging as a …
Self-supervised learning methods and applications in medical imaging analysis: A survey
The scarcity of high-quality annotated medical imaging datasets is a major problem that
collides with machine learning applications in the field of medical imaging analysis and …
ConvNeXt V2: Co-designing and scaling ConvNets with masked autoencoders
Driven by improved architectures and better representation learning frameworks, the field of
visual recognition has enjoyed rapid modernization and performance boost in the early …
An empirical study of training self-supervised vision transformers
This paper does not describe a novel method. Instead, it studies a straightforward,
incremental, yet must-know baseline given the recent progress in computer vision: self …
iBOT: Image BERT pre-training with online tokenizer
The success of language Transformers is primarily attributed to the pretext task of masked
language modeling (MLM), where texts are first tokenized into semantically meaningful …
Prompt, generate, then cache: Cascade of foundation models makes strong few-shot learners
Visual recognition in low-data regimes requires deep neural networks to learn generalized
representations from limited training samples. Recently, CLIP-based methods have shown …
Towards out-of-distribution generalization: A survey
Traditional machine learning paradigms are based on the assumption that both training and
test data follow the same statistical pattern, which is mathematically referred to as …
Diffusion autoencoders: Toward a meaningful and decodable representation
Diffusion probabilistic models (DPMs) have achieved remarkable quality in image
generation that rivals GANs'. But unlike GANs, DPMs use a set of latent variables that lack …
Contrastive learning of medical visual representations from paired images and text
Learning visual representations of medical images (eg, X-rays) is core to medical image
understanding but its progress has been held back by the scarcity of human annotations …
Graph self-supervised learning: A survey
Deep learning on graphs has attracted significant interest recently. However, most of the
works have focused on (semi-) supervised learning, resulting in shortcomings including …