Deep multi-view learning methods: A review

X Yan, S Hu, Y Mao, Y Ye, H Yu - Neurocomputing, 2021 - Elsevier
Multi-view learning (MVL) has attracted increasing attention and achieved great practical
success by exploiting complementary information of multiple features or modalities …

To compress or not to compress—self-supervised learning and information theory: A review

R Shwartz-Ziv, Y LeCun - Entropy, 2024 - mdpi.com
Deep neural networks excel in supervised learning tasks but are constrained by the need for
extensive labeled data. Self-supervised learning emerges as a promising alternative …

Dual contrastive prediction for incomplete multi-view representation learning

Y Lin, Y Gou, X Liu, J Bai, J Lv… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
In this article, we propose a unified framework to solve the following two challenging
problems in incomplete multi-view representation learning: i) how to learn a consistent …

What makes multi-modal learning better than single (provably)

Y Huang, C Du, Z Xue, X Chen… - Advances in Neural …, 2021 - proceedings.neurips.cc
The world provides us with data of multiple modalities. Intuitively, models fusing data from
different modalities outperform their uni-modal counterparts, since more information is …

SimCVD: Simple contrastive voxel-wise representation distillation for semi-supervised medical image segmentation

C You, Y Zhou, R Zhao, L Staib… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Automated segmentation in medical image analysis is a challenging task that requires a
large amount of manually labeled data. However, most existing learning-based approaches …

Shape-erased feature learning for visible-infrared person re-identification

J Feng, A Wu, WS Zheng - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
Due to the modality gap between visible and infrared images with high visual ambiguity,
learning diverse modality-shared semantic concepts for visible-infrared person re …

Semi-supervised and unsupervised deep visual learning: A survey

Y Chen, M Mancini, X Zhu… - IEEE transactions on …, 2022 - ieeexplore.ieee.org
State-of-the-art deep learning models are often trained with a large amount of costly labeled
training data. However, requiring exhaustive manual annotations may degrade the model's …

From canonical correlation analysis to self-supervised graph neural networks

H Zhang, Q Wu, J Yan, D Wipf… - Advances in Neural …, 2021 - proceedings.neurips.cc
We introduce a conceptually simple yet effective model for self-supervised representation
learning with graph data. It follows the previous methods that generate two views of an input …

Exploiting domain-specific features to enhance domain generalization

MH Bui, T Tran, A Tran… - Advances in Neural …, 2021 - proceedings.neurips.cc
Domain Generalization (DG) aims to train a model, from multiple observed source
domains, in order to perform well on unseen target domains. To obtain the generalization …

Towards self-interpretable graph-level anomaly detection

Y Liu, K Ding, Q Lu, F Li… - Advances in Neural …, 2024 - proceedings.neurips.cc
Graph-level anomaly detection (GLAD) aims to identify graphs that exhibit notable
dissimilarity compared to the majority in a collection. However, current works primarily focus …