Self-supervised learning for time series analysis: Taxonomy, progress, and prospects
Self-supervised learning (SSL) has recently achieved impressive performance on various
time series tasks. The most prominent advantage of SSL is that it reduces the dependence …
Automated medical coding on MIMIC-III and MIMIC-IV: a critical review and replicability study
Medical coding is the task of assigning medical codes to clinical free-text documentation.
Healthcare professionals manually assign such codes to track patient diagnoses and …
MAViL: Masked audio-video learners
Abstract We present Masked Audio-Video Learners (MAViL) to learn audio-visual
representations with three complementary forms of self-supervision: (1) reconstructing …
Comparative layer-wise analysis of self-supervised speech models
Many self-supervised speech models, varying in their pre-training objective, input modality,
and pre-training data, have been proposed in the last few years. Despite impressive …
ML-SUPERB: Multilingual speech universal performance benchmark
Speech processing Universal PERformance Benchmark (SUPERB) is a leaderboard to
benchmark the performance of Self-Supervised Learning (SSL) models on various speech …
Exploration of efficient end-to-end ASR using discretized input from self-supervised learning
Self-supervised learning (SSL) of speech has shown impressive results in speech-related
tasks, particularly in automatic speech recognition (ASR). While most methods employ the …
A path towards autonomous machine intelligence, version 0.9.2, 2022-06-27
Y LeCun - Open Review, 2022 - openreview.net
How could machines learn as efficiently as humans and animals? How could machines
learn to reason and plan? How could machines learn representations of percepts and action …
DPHuBERT: Joint distillation and pruning of self-supervised speech models
Self-supervised learning (SSL) has achieved notable success in many speech processing
tasks, but the large model size and heavy computational cost hinder the deployment …
DinoSR: Self-distillation and online clustering for self-supervised speech representation learning
In this paper, we introduce self-distillation and online clustering for self-supervised speech
representation learning (DinoSR) which combines masked language modeling, self …
Reproducing Whisper-style training using an open-source toolkit and publicly available data
Pre-training speech models on large volumes of data has achieved remarkable success.
OpenAI Whisper is a multilingual multitask model trained on 680k hours of supervised …