Self-supervised speech representation learning: A review

A Mohamed, H Lee, L Borgholt… - IEEE Journal of …, 2022 - ieeexplore.ieee.org
Although supervised deep learning has revolutionized speech and audio processing, it has
necessitated the building of specialist models for individual tasks and application scenarios …

Neural machine translation for low-resource languages: A survey

S Ranathunga, ESA Lee, M Prifti Skenduli… - ACM Computing …, 2023 - dl.acm.org
Neural Machine Translation (NMT) has seen tremendous growth since the early 2000s and has
already entered a mature phase. While considered the most widely …

True few-shot learning with language models

E Perez, D Kiela, K Cho - Advances in neural information …, 2021 - proceedings.neurips.cc
Pretrained language models (LMs) perform well on many tasks even when learning from a
few examples, but prior work uses many held-out examples to tune various aspects of …

Unsupervised speech recognition

A Baevski, WN Hsu, A Conneau… - Advances in Neural …, 2021 - proceedings.neurips.cc
Despite rapid progress in the recent past, current speech recognition systems still require
labeled training data which limits this technology to a small fraction of the languages spoken …

Contrastive learning for sequential recommendation

X **e, F Sun, Z Liu, S Wu, J Gao… - 2022 IEEE 38th …, 2022 - ieeexplore.ieee.org
Sequential recommendation methods play a crucial role in modern recommender systems
because of their ability to capture a user's dynamic interest from her/his historical inter …

Multilingual denoising pre-training for neural machine translation

Y Liu, J Gu, N Goyal, X Li, S Edunov… - Transactions of the …, 2020 - direct.mit.edu
This paper demonstrates that multilingual denoising pre-training produces significant
performance gains across a wide variety of machine translation (MT) tasks. We present …

Cross-lingual language model pretraining

A Conneau, G Lample - Advances in neural information …, 2019 - proceedings.neurips.cc
Recent studies have demonstrated the efficiency of generative pretraining for English
natural language understanding. In this work, we extend this approach to multiple …

CTRL: A conditional transformer language model for controllable generation

NS Keskar, B McCann, LR Varshney, C Xiong… - arXiv preprint arXiv …, 2019 - arxiv.org
Large-scale language models show promising text generation capabilities, but users cannot
easily control particular aspects of the generated text. We release CTRL, a 1.63 billion …

MASS: Masked sequence to sequence pre-training for language generation

K Song, X Tan, T Qin, J Lu, TY Liu - arXiv preprint arXiv:1905.02450, 2019 - arxiv.org
Pre-training and fine-tuning, e.g., BERT, have achieved great success in language
understanding by transferring knowledge from rich-resource pre-training task to the low/zero …

Language models are unsupervised multitask learners

A Radford, J Wu, R Child, D Luan… - OpenAI …, 2019 - storage.prod.researchhub.com
Natural language processing tasks, such as question answering, machine translation,
reading comprehension, and summarization, are typically approached with supervised …