A Survey on Self-supervised Learning: Algorithms, Applications, and Future Trends
Deep supervised learning algorithms typically require a large volume of labeled data to
achieve satisfactory performance. However, the process of collecting and labeling such data …
Audio self-supervised learning: A survey
Similar to humans' cognitive ability to generalize knowledge and skills, self-supervised
learning (SSL) targets discovering general representations from large-scale data. This …
Masked autoencoders as spatiotemporal learners
This paper studies a conceptually simple extension of Masked Autoencoders (MAE) to
spatiotemporal representation learning from videos. We randomly mask out spacetime …
VATT: Transformers for multimodal self-supervised learning from raw video, audio and text
We present a framework for learning multimodal representations from unlabeled data using
convolution-free Transformer architectures. Specifically, our Video-Audio-Text Transformer …
Siamese masked autoencoders
Establishing correspondence between images or scenes is a significant challenge in
computer vision, especially given occlusions, viewpoint changes, and varying object …
Learning to exploit temporal structure for biomedical vision-language processing
Self-supervised learning in vision-language processing (VLP) exploits semantic alignment
between imaging and text modalities. Prior work in biomedical VLP has mostly relied on the …
Wav2CLIP: Learning robust audio representations from CLIP
We propose Wav2CLIP, a robust audio representation learning method by distilling from
Contrastive Language-Image Pre-training (CLIP). We systematically evaluate Wav2CLIP on …
Verbs in action: Improving verb understanding in video-language models
Understanding verbs is crucial to modelling how people and objects interact with each other
and the environment through space and time. Recently, state-of-the-art video-language …
BigSSL: Exploring the frontier of large-scale semi-supervised learning for automatic speech recognition
We summarize the results of a host of efforts using giant automatic speech recognition (ASR)
models pre-trained using large, diverse unlabeled datasets containing approximately a …
Contrastive audio-visual masked autoencoder
In this paper, we first extend the recent Masked Auto-Encoder (MAE) model from a single
modality to audio-visual multi-modalities. Subsequently, we propose the Contrastive Audio …