Text data augmentation for deep learning
Abstract: Natural Language Processing (NLP) is one of the most captivating applications of
Deep Learning. In this survey, we consider how the Data Augmentation training strategy can …
The forward-forward algorithm: Some preliminary investigations
G Hinton - arXiv preprint arXiv:2212.13345, 2022 - arxiv.org
The aim of this paper is to introduce a new learning procedure for neural networks and to
demonstrate that it works well enough on a few small problems to be worth further …
Remember the past: Distilling datasets into addressable memories for neural networks
We propose an algorithm that compresses the critical information of a large dataset into
compact addressable memories. These memories can then be recalled to quickly re-train a …
On implicit bias in overparameterized bilevel optimization
Many problems in machine learning involve bilevel optimization (BLO), including
hyperparameter optimization, meta-learning, and dataset distillation. Bilevel problems …
Compute-efficient deep learning: Algorithmic trends and opportunities
Although deep learning has made great progress in recent years, the exploding economic
and environmental costs of training neural networks are becoming unsustainable. To …
Meta-learning to improve pre-training
Pre-training (PT) followed by fine-tuning (FT) is an effective method for training neural
networks, and has led to significant performance improvements in many domains. PT can …
Noether networks: meta-learning useful conserved quantities
Progress in machine learning (ML) stems from a combination of data availability,
computational resources, and an appropriate encoding of inductive biases. Useful biases …
Auxiliary learning with joint task and data scheduling
Existing auxiliary learning approaches only consider the relationships between the target
task and the auxiliary tasks, ignoring the fact that data samples within an auxiliary task could …
Learning to scaffold: Optimizing model explanations for teaching
Modern machine learning models are opaque, and as a result there is a burgeoning
academic subfield on methods that explain these models' behavior. However, what is the …
Auxiliary learning as an asymmetric bargaining game
Auxiliary learning is an effective method for enhancing the generalization capabilities of
trained models, particularly when dealing with small datasets. However, this approach may …