SimGRACE: A simple framework for graph contrastive learning without data augmentation

J Xia, L Wu, J Chen, B Hu, SZ Li - … of the ACM Web Conference 2022, 2022 - dl.acm.org
Graph contrastive learning (GCL) has emerged as a dominant technique for graph
representation learning which maximizes the mutual information between paired graph …
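
The snippet refers to the mutual-information-maximizing objective behind graph contrastive learning. Below is a minimal sketch of a generic NT-Xent (InfoNCE) loss over two embedding views of the same graphs; SimGRACE notably obtains the second view from a weight-perturbed copy of the encoder rather than from augmented data. The tensor shapes and temperature here are illustrative assumptions, not code from the paper.

# Generic NT-Xent (InfoNCE) objective: the two views of each graph are pulled
# together while all other graphs in the batch act as negatives.
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: [batch, dim] embeddings of the same graphs under two views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # pairwise cosine similarities
    labels = torch.arange(z1.size(0))       # positive pairs sit on the diagonal
    return F.cross_entropy(logits, labels)

# Toy usage with random embeddings standing in for encoder outputs.
loss = nt_xent(torch.randn(32, 128), torch.randn(32, 128))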

Review–A survey of learning from noisy labels

X Liang, X Liu, L Yao - ECS Sensors Plus, 2022 - iopscience.iop.org
Deep Learning has achieved remarkable successes in many industry applications and
scientific research fields. One essential reason is that deep models can learn rich …

DISC: Learning from noisy labels via dynamic instance-specific selection and correction

Y Li, H Han, S Shan, X Chen - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
Existing studies indicate that deep neural networks (DNNs) can eventually memorize the
label noise. We observe that the memorization strength of DNNs towards each instance is …
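
The observation that memorization strength differs per instance motivates instance-specific selection. The sketch below shows a generic version of that idea: track a running confidence per training example and keep the examples the model is consistently confident about. The momentum and threshold values are placeholders, and this is not DISC's exact selection rule.

# Generic instance-specific selection: keep an exponential moving average of
# each example's confidence in its given label and treat high-confidence
# examples as clean. Momentum and threshold are illustrative placeholders.
import torch

class InstanceSelector:
    def __init__(self, num_examples, momentum=0.9, threshold=0.7):
        self.conf = torch.zeros(num_examples)   # running confidence per example
        self.momentum = momentum
        self.threshold = threshold

    def update(self, idx, probs, labels):
        """idx: example indices; probs: [B, C] softmax outputs; labels: given (possibly noisy) labels."""
        p_given = probs[torch.arange(len(labels)), labels]
        self.conf[idx] = self.momentum * self.conf[idx] + (1 - self.momentum) * p_given

    def clean_mask(self, idx):
        return self.conf[idx] > self.threshold  # examples currently judged clean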

CVT-SLR: Contrastive visual-textual transformation for sign language recognition with variational alignment

J Zheng, Y Wang, C Tan, S Li… - Proceedings of the …, 2023 - openaccess.thecvf.com
Sign language recognition (SLR) is a weakly supervised task that annotates sign videos as
textual glosses. Recent studies show that insufficient training caused by the lack of large …

Combating noisy labels with sample selection by mining high-discrepancy examples

X Xia, B Han, Y Zhan, J Yu, M Gong… - Proceedings of the …, 2023 - openaccess.thecvf.com
The sample selection approach is popular in learning with noisy labels. The state-of-the-art
methods train two deep networks simultaneously for sample selection, which aims to employ …
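
The two-network selection the snippet mentions is usually a co-teaching-style loop: each network picks its small-loss examples and its peer trains on them. The sketch below shows that baseline recipe only; the cited paper instead mines high-discrepancy examples, and the keep ratio, models, and optimizers here are placeholders.

# Co-teaching-style step: each of two networks selects its small-loss
# examples, and the peer network trains on that subset.
import torch
import torch.nn.functional as F

def coteach_step(net_a, net_b, opt_a, opt_b, x, y, keep_ratio=0.7):
    loss_a = F.cross_entropy(net_a(x), y, reduction="none")
    loss_b = F.cross_entropy(net_b(x), y, reduction="none")
    k = max(1, int(keep_ratio * len(y)))
    idx_a = loss_a.topk(k, largest=False).indices   # net A's small-loss picks
    idx_b = loss_b.topk(k, largest=False).indices   # net B's small-loss picks

    # Each network updates on the examples its peer judged clean.
    opt_a.zero_grad(); F.cross_entropy(net_a(x[idx_b]), y[idx_b]).backward(); opt_a.step()
    opt_b.zero_grad(); F.cross_entropy(net_b(x[idx_a]), y[idx_a]).backward(); opt_b.step()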

Temporal attention unit: Towards efficient spatiotemporal predictive learning

C Tan, Z Gao, L Wu, Y Xu, J Xia… - Proceedings of the …, 2023 - openaccess.thecvf.com
Spatiotemporal predictive learning aims to generate future frames by learning from historical
frames. In this paper, we investigate existing methods and present a general framework of …
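
For reference on the task the snippet defines, the sketch below shows the bare training objective of spatiotemporal predictive learning: a model maps a window of past frames to future frames and is fit with a reconstruction loss. The Conv3d stand-in and tensor shapes are placeholders, not the paper's temporal attention unit.

# Bare-bones objective: predict future frames from past frames and minimize
# an MSE reconstruction loss. The Conv3d model is a trivial stand-in.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Conv3d(1, 1, kernel_size=3, padding=1)     # placeholder predictor
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

past = torch.randn(8, 1, 10, 64, 64)                  # [batch, channel, frames, H, W]
future = torch.randn(8, 1, 10, 64, 64)                # ground-truth future frames

pred = model(past)                                     # real models map past frames to future frames
loss = F.mse_loss(pred, future)
opt.zero_grad(); loss.backward(); opt.step()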

Not all samples are born equal: Towards effective clean-label backdoor attacks

Y Gao, Y Li, L Zhu, D Wu, Y Jiang, ST Xia - Pattern Recognition, 2023 - Elsevier
Recent studies demonstrated that deep neural networks (DNNs) are vulnerable to backdoor
attacks. The attacked model behaves normally on benign samples, while its predictions are …

Mole-BERT: Rethinking pre-training graph neural networks for molecules

J Xia, C Zhao, B Hu, Z Gao, C Tan, Y Liu, S Li, SZ Li - 2023 - chemrxiv.org
Recent years have witnessed the prosperity of pre-training graph neural networks (GNNs)
for molecules. Typically, atom types as node attributes are randomly masked and GNNs are …
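
The snippet describes the conventional pre-training scheme that Mole-BERT rethinks: mask a fraction of atom-type node attributes and train the network to recover them. The sketch below shows that conventional scheme with a placeholder MLP encoder; a real setup would use a GNN over the molecular graph, and the vocabulary size and mask rate are assumptions.

# Conventional masked atom-type pre-training: hide a fraction of atom-type
# node attributes and train the model to recover them.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_ATOM_TYPES = 119                    # assumed atom-type vocabulary size
MASK_ID = NUM_ATOM_TYPES                # extra id used for masked atoms

embed = nn.Embedding(NUM_ATOM_TYPES + 1, 64)
encoder = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, NUM_ATOM_TYPES))

atom_types = torch.randint(0, NUM_ATOM_TYPES, (256,))  # toy batch of node labels
mask = torch.rand(256) < 0.15                           # mask roughly 15% of atoms
inputs = atom_types.clone()
inputs[mask] = MASK_ID

logits = encoder(embed(inputs))                         # predict each node's original atom type
loss = F.cross_entropy(logits[mask], atom_types[mask])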

DealMVC: Dual contrastive calibration for multi-view clustering

X Yang, J Jin, S Wang, K Liang, Y Liu, Y Wen… - Proceedings of the 31st …, 2023 - dl.acm.org
Benefiting from the strong view-consistent information mining capacity, multi-view
contrastive clustering has attracted plenty of attention in recent years. However, we observe …

CS-Isolate: Extracting hard confident examples by content and style isolation

Y Lin, Y Yao, X Shi, M Gong, X Shen… - Advances in Neural …, 2024 - proceedings.neurips.cc
Label noise widely exists in large-scale image datasets. To mitigate the side effects of label
noise, state-of-the-art methods focus on selecting confident examples by leveraging semi …