Self-supervised learning in remote sensing: A review
Y Wang, CM Albrecht, NAA Braham… - IEEE Geoscience and …, 2022 - ieeexplore.ieee.org
In deep learning research, self-supervised learning (SSL) has received great attention,
triggering interest within both the computer vision and remote sensing communities. While …
Contrastive representation learning: A framework and review
Contrastive Learning has recently received interest due to its success in self-supervised
representation learning in the computer vision domain. However, the origins of Contrastive …
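The snippet cuts off before the framework itself. As a rough illustration of the kind of contrastive objective such a review surveys, here is a minimal InfoNCE-style loss in PyTorch; the function name, temperature, and tensor shapes are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """Minimal InfoNCE-style contrastive loss.

    z1, z2: (N, D) embeddings of two augmented views of the same N samples.
    Positive pairs are (z1[i], z2[i]); every other row in the batch serves
    as a negative.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature              # (N, N) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

# Usage with embeddings from any encoder applied to two views of a batch.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = info_nce_loss(z1, z2)
```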
Zero-shot text-guided object generation with Dream Fields
We combine neural rendering with multi-modal image and text representations to synthesize
diverse 3D objects solely from natural language descriptions. Our method, Dream Fields …
Interpretable and generalizable graph learning via stochastic attention mechanism
Interpretable graph learning is needed, as many scientific applications depend on learning
models to collect insights from graph-structured data. Previous works mostly focused on …
Pre-training molecular graph representation with 3D geometry
Molecular graph representation learning is a fundamental problem in modern drug and
material discovery. Molecular graphs are typically modeled by their 2D topological …
Graph contrastive learning with adaptive augmentation
Recently, contrastive learning (CL) has emerged as a successful method for unsupervised
graph representation learning. Most graph CL methods first perform stochastic augmentation …
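The snippet ends just as it introduces the stochastic augmentation step. Below is a minimal sketch of what such graph augmentation can look like: edges are dropped with a probability that decreases with an importance score (for instance a centrality measure), and feature columns are randomly masked, producing two views of the same graph for contrastive training. All names, probabilities, and the toy graph are assumptions for illustration, not the paper's exact scheme.

```python
import torch

def drop_edges(edge_index, edge_weight, p_max=0.7):
    """Drop edges stochastically, keeping higher-weight edges more often.

    edge_index:  (2, E) tensor of source/target node ids.
    edge_weight: (E,) importance scores; higher weight -> lower drop probability.
    """
    w = (edge_weight - edge_weight.min()) / (edge_weight.max() - edge_weight.min() + 1e-12)
    drop_prob = (1.0 - w) * p_max
    keep = torch.rand_like(drop_prob) >= drop_prob
    return edge_index[:, keep]

def mask_features(x, p=0.2):
    """Zero out random feature columns to form an augmented node-feature view."""
    mask = (torch.rand(x.size(1), device=x.device) >= p).float()
    return x * mask

# Two stochastic views of the same toy graph for a contrastive objective.
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
edge_weight = torch.tensor([0.9, 0.1, 0.5, 0.7])
x = torch.randn(4, 16)
view1 = (drop_edges(edge_index, edge_weight), mask_features(x))
view2 = (drop_edges(edge_index, edge_weight), mask_features(x))
```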
Contrastive learning for representation degeneration problem in sequential recommendation
Recent advancements of sequential deep learning models such as Transformer and BERT
have significantly facilitated sequential recommendation. However, according to our …
Bootstrap your own latent: A new approach to self-supervised learning
We introduce Bootstrap Your Own Latent (BYOL), a new approach to self-
supervised image representation learning. BYOL relies on two neural networks, referred to …
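As a companion to the snippet, here is a minimal sketch of the two-network setup BYOL is known for: an online branch with a predictor, and a target branch whose weights follow an exponential moving average of the online encoder, with no gradients flowing into it. Layer sizes, the momentum value, and the single-direction loss are simplifying assumptions, not the paper's exact configuration.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class OnlineNet(nn.Module):
    """Online branch: encoder plus predictor head."""
    def __init__(self, dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(784, 512), nn.ReLU(), nn.Linear(512, dim))
        self.predictor = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

online = OnlineNet()
target = copy.deepcopy(online.encoder)            # target branch: encoder copy, no gradients
for p in target.parameters():
    p.requires_grad_(False)

def byol_loss(online, target, v1, v2):
    """Regress the online prediction of one view onto the target embedding of the other
    (in practice the loss is symmetrized over the two views)."""
    p1 = F.normalize(online.predictor(online.encoder(v1)), dim=1)
    with torch.no_grad():
        z2 = F.normalize(target(v2), dim=1)
    return (2 - 2 * (p1 * z2).sum(dim=1)).mean()   # normalized MSE between unit vectors

@torch.no_grad()
def ema_update(online, target, m=0.99):
    """Target weights track an exponential moving average of the online encoder."""
    for po, pt in zip(online.encoder.parameters(), target.parameters()):
        pt.mul_(m).add_(po, alpha=1 - m)
```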
Self-supervised learning with data augmentations provably isolates content from style
Self-supervised representation learning has shown remarkable success in a number of
domains. A common practice is to perform data augmentation via hand-crafted …
Improving multimodal fusion with hierarchical mutual information maximization for multimodal sentiment analysis
In multimodal sentiment analysis (MSA), the performance of a model highly depends on the
quality of synthesized embeddings. These embeddings are generated from the upstream …