A Survey on Self-supervised Learning: Algorithms, Applications, and Future Trends
Deep supervised learning algorithms typically require a large volume of labeled data to
achieve satisfactory performance. However, the process of collecting and labeling such data …
Self-supervised representation learning: Introduction, advances, and challenges
Self-supervised representation learning (SSRL) methods aim to provide powerful, deep
feature learning without the requirement of large annotated data sets, thus alleviating the …
DINOv2: Learning robust visual features without supervision
The recent breakthroughs in natural language processing for model pretraining on large
quantities of data have opened the way for similar foundation models in computer vision …
Towards a general-purpose foundation model for computational pathology
Quantitative evaluation of tissue images is crucial for computational pathology (CPath) tasks,
requiring the objective characterization of histopathological entities from whole-slide images …
Visual prompting via image inpainting
How does one adapt a pre-trained visual model to novel downstream tasks without task-specific finetuning or any model modification? Inspired by prompting in NLP, this paper …
Masked siamese networks for label-efficient learning
We propose Masked Siamese Networks (MSN), a self-supervised learning
framework for learning image representations. Our approach matches the representation of …
SLIP: Self-supervision meets language-image pre-training
Recent work has shown that self-supervised pre-training leads to improvements over
supervised learning on challenging visual recognition tasks. CLIP, an exciting new …
Pushing the limits of simple pipelines for few-shot learning: External data and fine-tuning make a difference
Few-shot learning (FSL) is an important and topical problem in computer vision that has
motivated extensive research into numerous methods spanning from sophisticated meta …
Unified contrastive learning in image-text-label space
Visual recognition has recently been learned via either supervised learning on human-annotated
image-label data or language-image contrastive learning with webly-crawled image-text …
Contrast with reconstruct: Contrastive 3D representation learning guided by generative pretraining
Mainstream 3D representation learning approaches are built upon contrastive or generative
modeling pretext tasks, where great improvements in performance on various downstream …