Self-supervised learning in remote sensing: A review

Y Wang, CM Albrecht, NAA Braham… - IEEE Geoscience and …, 2022 - ieeexplore.ieee.org
In deep learning research, self-supervised learning (SSL) has received great attention,
triggering interest within both the computer vision and remote sensing communities. While …

Contrastive representation learning: A framework and review

PH Le-Khac, G Healy, AF Smeaton - IEEE Access, 2020 - ieeexplore.ieee.org
Contrastive Learning has recently received interest due to its success in self-supervised
representation learning in the computer vision domain. However, the origins of Contrastive …

Zero-shot text-guided object generation with dream fields

A Jain, B Mildenhall, JT Barron… - Proceedings of the …, 2022 - openaccess.thecvf.com
We combine neural rendering with multi-modal image and text representations to synthesize
diverse 3D objects solely from natural language descriptions. Our method, Dream Fields …

Interpretable and generalizable graph learning via stochastic attention mechanism

S Miao, M Liu, P Li - International Conference on Machine …, 2022 - proceedings.mlr.press
Interpretable graph learning is in need as many scientific applications depend on learning
models to collect insights from graph-structured data. Previous works mostly focused on …

Pre-training molecular graph representation with 3D geometry

S Liu, H Wang, W Liu, J Lasenby, H Guo… - arXiv preprint arXiv …, 2021 - arxiv.org
Molecular graph representation learning is a fundamental problem in modern drug and
material discovery. Molecular graphs are typically modeled by their 2D topological …

Graph contrastive learning with adaptive augmentation

Y Zhu, Y Xu, F Yu, Q Liu, S Wu, L Wang - Proceedings of the web …, 2021 - dl.acm.org
Recently, contrastive learning (CL) has emerged as a successful method for unsupervised
graph representation learning. Most graph CL methods first perform stochastic augmentation …

Contrastive learning for representation degeneration problem in sequential recommendation

R Qiu, Z Huang, H Yin, Z Wang - … conference on web search and data …, 2022 - dl.acm.org
Recent advancements of sequential deep learning models such as Transformer and BERT
have significantly facilitated the sequential recommendation. However, according to our …

Bootstrap your own latent-a new approach to self-supervised learning

JB Grill, F Strub, F Altché, C Tallec… - Advances in neural …, 2020 - proceedings.neurips.cc
We introduce Bootstrap Your Own Latent (BYOL), a new approach to self-supervised
image representation learning. BYOL relies on two neural networks, referred to …

Self-supervised learning with data augmentations provably isolates content from style

J Von Kügelgen, Y Sharma, L Gresele… - Advances in neural …, 2021 - proceedings.neurips.cc
Self-supervised representation learning has shown remarkable success in a number of
domains. A common practice is to perform data augmentation via hand-crafted …

Improving multimodal fusion with hierarchical mutual information maximization for multimodal sentiment analysis

W Han, H Chen, S Poria - arXiv preprint arXiv:2109.00412, 2021 - arxiv.org
In multimodal sentiment analysis (MSA), the performance of a model highly depends on the
quality of synthesized embeddings. These embeddings are generated from the upstream …