Text data augmentation for deep learning

C Shorten, TM Khoshgoftaar, B Furht - Journal of Big Data, 2021 - Springer
Natural Language Processing (NLP) is one of the most captivating applications of
Deep Learning. In this survey, we consider how the Data Augmentation training strategy can …

The forward-forward algorithm: Some preliminary investigations

G Hinton - arXiv preprint arXiv:2212.13345, 2022 - arxiv.org
The aim of this paper is to introduce a new learning procedure for neural networks and to
demonstrate that it works well enough on a few small problems to be worth further …

Remember the past: Distilling datasets into addressable memories for neural networks

Z Deng, O Russakovsky - Advances in Neural Information …, 2022 - proceedings.neurips.cc
We propose an algorithm that compresses the critical information of a large dataset into
compact addressable memories. These memories can then be recalled to quickly re-train a …

On implicit bias in overparameterized bilevel optimization

P Vicol, JP Lorraine, F Pedregosa… - International …, 2022 - proceedings.mlr.press
Many problems in machine learning involve bilevel optimization (BLO), including
hyperparameter optimization, meta-learning, and dataset distillation. Bilevel problems …
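
For reference, the bilevel structure named in this entry is usually written as a nested problem in which an outer objective is evaluated at the minimizer of an inner objective. The generic formulation below is standard background, not the specific overparameterized setup analyzed in the paper:

\[
\lambda^{*} \in \arg\min_{\lambda} F\bigl(\lambda, \theta^{*}(\lambda)\bigr)
\quad \text{subject to} \quad
\theta^{*}(\lambda) \in \arg\min_{\theta} f(\lambda, \theta),
\]

where \(F\) is the outer objective (e.g. a validation loss over hyperparameters \(\lambda\)) and \(f\) is the inner objective (e.g. a training loss over model parameters \(\theta\)). Hyperparameter optimization, meta-learning, and dataset distillation all instantiate this template with different choices of \(F\) and \(f\).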

Compute-efficient deep learning: Algorithmic trends and opportunities

BR Bartoldson, B Kailkhura, D Blalock - Journal of Machine Learning …, 2023 - jmlr.org
Although deep learning has made great progress in recent years, the exploding economic
and environmental costs of training neural networks are becoming unsustainable. To …

Meta-learning to improve pre-training

A Raghu, J Lorraine, S Kornblith… - Advances in …, 2021 - proceedings.neurips.cc
Pre-training (PT) followed by fine-tuning (FT) is an effective method for training neural
networks, and has led to significant performance improvements in many domains. PT can …
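
As a rough illustration of the PT-then-FT pipeline referred to in this entry (not the meta-learning method the paper proposes), a minimal PyTorch-style sketch follows; the model, data, and hyperparameters are synthetic placeholders.

# Minimal sketch of pre-training followed by fine-tuning (PT -> FT).
# Toy model and synthetic data stand in for a real pipeline; all names are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))
pretrain_head = nn.Linear(64, 32)   # surrogate head for the PT objective (here: reconstruction)
task_head = nn.Linear(64, 10)       # head for the downstream (FT) task

# --- Pre-training (PT): train encoder + pretrain_head on a surrogate objective ---
x_unlabeled = torch.randn(256, 32)
opt = torch.optim.Adam(list(encoder.parameters()) + list(pretrain_head.parameters()), lr=1e-3)
for _ in range(100):
    recon = pretrain_head(encoder(x_unlabeled))
    loss = nn.functional.mse_loss(recon, x_unlabeled)   # toy reconstruction loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# --- Fine-tuning (FT): reuse the pre-trained encoder on the labelled target task ---
x_task = torch.randn(128, 32)
y_task = torch.randint(0, 10, (128,))
opt = torch.optim.Adam(list(encoder.parameters()) + list(task_head.parameters()), lr=1e-4)
for _ in range(100):
    logits = task_head(encoder(x_task))
    loss = nn.functional.cross_entropy(logits, y_task)
    opt.zero_grad()
    loss.backward()
    opt.step()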

Noether networks: meta-learning useful conserved quantities

F Alet, D Doblar, A Zhou… - Advances in …, 2021 - proceedings.neurips.cc
Progress in machine learning (ML) stems from a combination of data availability,
computational resources, and an appropriate encoding of inductive biases. Useful biases …

Auxiliary learning with joint task and data scheduling

H Chen, X Wang, C Guan, Y Liu… - … Conference on Machine …, 2022 - proceedings.mlr.press
Existing auxiliary learning approaches only consider the relationships between the target
task and the auxiliary tasks, ignoring the fact that data samples within an auxiliary task could …

Learning to scaffold: Optimizing model explanations for teaching

P Fernandes, M Treviso, D Pruthi… - Advances in Neural …, 2022 - proceedings.neurips.cc
Modern machine learning models are opaque, and as a result there is a burgeoning
academic subfield on methods that explain these models' behavior. However, what is the …

Auxiliary learning as an asymmetric bargaining game

A Shamsian, A Navon, N Glazer… - International …, 2023 - proceedings.mlr.press
Auxiliary learning is an effective method for enhancing the generalization capabilities of
trained models, particularly when dealing with small datasets. However, this approach may …
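
For context, the basic auxiliary-learning setup referenced in the auxiliary-learning entries above combines the target-task loss with one or more auxiliary-task losses on a shared representation. The sketch below is the generic fixed-weight baseline, not the data-scheduling or bargaining-game methods these papers propose; all names and weights are illustrative.

# Minimal sketch of auxiliary learning: a shared encoder trained on the target
# task plus a weighted auxiliary task. The cited papers replace the fixed weight
# with learned scheduling / bargaining between tasks.
import torch
import torch.nn as nn

torch.manual_seed(0)
shared = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
target_head = nn.Linear(32, 5)    # main task: 5-way classification
aux_head = nn.Linear(32, 1)       # auxiliary task: scalar regression
aux_weight = 0.3                  # fixed trade-off weight (illustrative)

x = torch.randn(64, 16)
y_target = torch.randint(0, 5, (64,))
y_aux = torch.randn(64, 1)

params = list(shared.parameters()) + list(target_head.parameters()) + list(aux_head.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
for _ in range(200):
    h = shared(x)
    loss = nn.functional.cross_entropy(target_head(h), y_target) \
         + aux_weight * nn.functional.mse_loss(aux_head(h), y_aux)
    opt.zero_grad()
    loss.backward()
    opt.step()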