Learning from noisy labels with deep neural networks: A survey
Deep learning has achieved remarkable success in numerous domains with help from large
amounts of big data. However, the quality of data labels is a concern because of the lack of …
Shortcut learning in deep neural networks
Deep learning has triggered the current rise of artificial intelligence and is the workhorse of
today's machine intelligence. Numerous success stories have rapidly spread all over …
FlexMatch: Boosting semi-supervised learning with curriculum pseudo labeling
The recently proposed FixMatch achieved state-of-the-art results on most semi-supervised
learning (SSL) benchmarks. However, like other modern SSL algorithms, FixMatch uses a …
Bootstrap your own latent: A new approach to self-supervised learning
Abstract We introduce Bootstrap Your Own Latent (BYOL), a new approach to self-
supervised image representation learning. BYOL relies on two neural networks, referred to …
Graph contrastive learning with augmentations
Generalizable, transferrable, and robust representation learning on graph-structured data
remains a challenge for current graph neural networks (GNNs). Unlike what has been …
A simple framework for contrastive learning of visual representations
This paper presents SimCLR: a simple framework for contrastive learning of visual
representations. We simplify recently proposed contrastive self-supervised learning …
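The contrastive objective behind SimCLR, the NT-Xent (normalized temperature-scaled cross-entropy) loss over two augmented views of each image, can be sketched as follows. This is an illustrative NumPy sketch, not the reference implementation; the function name and signature are assumptions.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss sketch (SimCLR-style contrastive objective).
    z1, z2: (N, D) embeddings of two augmented views of the same N images."""
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize
    sim = z @ z.T / temperature                        # scaled cosine similarities
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    # The positive for sample i is the other view of the same image: i+n (or i-n).
    pos_idx = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-log_prob[np.arange(2 * n), pos_idx].mean())
```

Each embedding is pulled toward the other view of the same image and pushed away from all other embeddings in the batch, which is why SimCLR benefits from large batch sizes.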
FixMatch: Simplifying semi-supervised learning with consistency and confidence
Semi-supervised learning (SSL) provides an effective means of leveraging unlabeled data
to improve a model's performance. This domain has seen fast progress recently, at the cost …
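FixMatch's core idea combines consistency and confidence: a hard pseudo-label from a weakly augmented view supervises the strongly augmented view, but only when the model's confidence clears a fixed threshold. A minimal NumPy sketch (function name and array layout are assumptions, not the paper's code):

```python
import numpy as np

def fixmatch_unlabeled_loss(probs_weak, logits_strong, threshold=0.95):
    """Confidence-thresholded pseudo-labeling sketch in the style of FixMatch.
    probs_weak: (N, C) class probabilities on weakly augmented views.
    logits_strong: (N, C) logits on strongly augmented views of the same images."""
    conf = probs_weak.max(axis=1)           # model confidence per example
    pseudo = probs_weak.argmax(axis=1)      # hard pseudo-labels
    mask = conf >= threshold                # keep only confident examples
    if not mask.any():
        return 0.0
    # Cross-entropy of strong-view predictions against the pseudo-labels.
    logp = logits_strong - np.log(np.exp(logits_strong).sum(axis=1, keepdims=True))
    return float(-logp[mask, pseudo[mask]].mean())
```

The fixed threshold is exactly what FlexMatch (above) replaces with a curriculum of per-class dynamic thresholds.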
Semi-supervised semantic segmentation with cross pseudo supervision
In this paper, we study the semi-supervised semantic segmentation problem via exploring
both labeled data and extra unlabeled data. We propose a novel consistency regularization …
Big self-supervised models are strong semi-supervised learners
One paradigm for learning from few labeled examples while making best use of a large
amount of unlabeled data is unsupervised pretraining followed by supervised fine-tuning …
VICReg: Variance-invariance-covariance regularization for self-supervised learning
Recent self-supervised methods for image representation learning are based on maximizing
the agreement between embedding vectors from different views of the same image. A trivial …
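VICReg avoids the trivial collapsed solution by adding variance and covariance regularizers to the invariance (agreement) term. A NumPy sketch under the paper's default loss weights; treat the function and its names as illustrative assumptions rather than the official implementation:

```python
import numpy as np

def vicreg_loss(z1, z2, sim_w=25.0, var_w=25.0, cov_w=1.0, eps=1e-4):
    """VICReg loss sketch. z1, z2: (N, D) embeddings of two views."""
    n, d = z1.shape
    # Invariance: mean-squared distance between the two views' embeddings.
    inv = ((z1 - z2) ** 2).mean()
    # Variance: hinge pushing each dimension's std above 1 (prevents collapse).
    def var_term(z):
        std = np.sqrt(z.var(axis=0) + eps)
        return np.maximum(0.0, 1.0 - std).mean()
    # Covariance: penalize off-diagonal entries of the feature covariance,
    # decorrelating dimensions so they carry non-redundant information.
    def cov_term(z):
        zc = z - z.mean(axis=0)
        cov = (zc.T @ zc) / (n - 1)
        off = cov - np.diag(np.diag(cov))
        return (off ** 2).sum() / d
    return float(sim_w * inv
                 + var_w * (var_term(z1) + var_term(z2))
                 + cov_w * (cov_term(z1) + cov_term(z2)))
```

Unlike the contrastive losses above, no negative pairs are needed: the variance and covariance terms alone rule out the trivial constant-embedding solution.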