Deep multi-view learning methods: A review
Multi-view learning (MVL) has attracted increasing attention and achieved great practical
success by exploiting complementary information of multiple features or modalities …
To compress or not to compress—self-supervised learning and information theory: A review
Deep neural networks excel in supervised learning tasks but are constrained by the need for
extensive labeled data. Self-supervised learning emerges as a promising alternative …
Dual contrastive prediction for incomplete multi-view representation learning
In this article, we propose a unified framework to solve the following two challenging
problems in incomplete multi-view representation learning: i) how to learn a consistent …
What makes multi-modal learning better than single (provably)
The world provides us with data of multiple modalities. Intuitively, models fusing data from
different modalities outperform their uni-modal counterparts, since more information is …
SimCVD: Simple contrastive voxel-wise representation distillation for semi-supervised medical image segmentation
Automated segmentation in medical image analysis is a challenging task that requires a
large amount of manually labeled data. However, most existing learning-based approaches …
Shape-erased feature learning for visible-infrared person re-identification
Due to the modality gap between visible and infrared images with high visual ambiguity,
learning diverse modality-shared semantic concepts for visible-infrared person re …
Semi-supervised and unsupervised deep visual learning: A survey
State-of-the-art deep learning models are often trained with a large amount of costly labeled
training data. However, requiring exhaustive manual annotations may degrade the model's …
From canonical correlation analysis to self-supervised graph neural networks
We introduce a conceptually simple yet effective model for self-supervised representation
learning with graph data. It follows the previous methods that generate two views of an input …
Exploiting domain-specific features to enhance domain generalization
Domain Generalization (DG) aims to train a model, from multiple observed source
domains, in order to perform well on unseen target domains. To obtain the generalization …
Towards self-interpretable graph-level anomaly detection
Graph-level anomaly detection (GLAD) aims to identify graphs that exhibit notable
dissimilarity compared to the majority in a collection. However, current works primarily focus …