SimGRACE: A simple framework for graph contrastive learning without data augmentation
Graph contrastive learning (GCL) has emerged as a dominant technique for graph
representation learning which maximizes the mutual information between paired graph …
Review–a survey of learning from noisy labels
X Liang, X Liu, L Yao - ECS Sensors Plus, 2022 - iopscience.iop.org
Deep Learning has achieved remarkable successes in many industry applications and
scientific research fields. One essential reason is that deep models can learn rich …
DISC: Learning from noisy labels via dynamic instance-specific selection and correction
Existing studies indicate that deep neural networks (DNNs) can eventually memorize the
label noise. We observe that the memorization strength of DNNs towards each instance is …
CVT-SLR: Contrastive visual-textual transformation for sign language recognition with variational alignment
Sign language recognition (SLR) is a weakly supervised task that annotates sign videos as
textual glosses. Recent studies show that insufficient training caused by the lack of large …
Combating noisy labels with sample selection by mining high-discrepancy examples
The sample selection approach is popular in learning with noisy labels. The state-of-the-art
methods train two deep networks simultaneously for sample selection, which aims to employ …
Temporal attention unit: Towards efficient spatiotemporal predictive learning
Spatiotemporal predictive learning aims to generate future frames by learning from historical
frames. In this paper, we investigate existing methods and present a general framework of …
Not all samples are born equal: Towards effective clean-label backdoor attacks
Recent studies demonstrated that deep neural networks (DNNs) are vulnerable to backdoor
attacks. The attacked model behaves normally on benign samples, while its predictions are …
Mole-BERT: Rethinking pre-training graph neural networks for molecules
Recent years have witnessed the prosperity of pre-training graph neural networks (GNNs)
for molecules. Typically, atom types as node attributes are randomly masked and GNNs are …
DealMVC: Dual contrastive calibration for multi-view clustering
Benefiting from the strong view-consistent information mining capacity, multi-view
contrastive clustering has attracted plenty of attention in recent years. However, we observe …
CS-Isolate: Extracting hard confident examples by content and style isolation
Label noise widely exists in large-scale image datasets. To mitigate the side effects of label
noise, state-of-the-art methods focus on selecting confident examples by leveraging semi …