Loss of plasticity in deep continual learning
Artificial neural networks, deep-learning methods and the backpropagation algorithm form
the foundation of modern machine learning and artificial intelligence. These methods are …
Stop regressing: Training value functions via classification for scalable deep RL
J Farebrother, J Orbay, Q Vuong, AA Taïga… - arXiv
Weight clipping for deep continual and reinforcement learning
Many failures in deep continual and reinforcement learning are associated with increasing
magnitudes of the weights, making them hard to change and potentially causing overfitting …
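One simple remedy in this spirit is to clip the weights after every optimizer step so their magnitudes stay bounded; below is a minimal PyTorch sketch, where the element-wise clamp and the threshold kappa are chosen for illustration rather than taken from the paper.

import torch

def clip_weights(model: torch.nn.Module, kappa: float = 2.0) -> None:
    # Clamp every parameter to [-kappa, kappa] in place after an optimizer step,
    # keeping weight magnitudes bounded so they remain easy to change later.
    with torch.no_grad():
        for p in model.parameters():
            p.clamp_(-kappa, kappa)

# Usage inside a training loop (model, loss, optimizer assumed to exist):
#   loss.backward()
#   optimizer.step()
#   clip_weights(model, kappa=2.0)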
Optimal Molecular Design: Generative Active Learning Combining REINVENT with Precise Binding Free Energy Ranking Simulations
Active learning (AL) is a specific instance of sequential experimental design and uses
machine learning to intelligently choose the next data point or batch of molecular structures …
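A minimal sketch of such a loop, with a random-forest ensemble choosing the next batch by predictive disagreement; the toy features and the stand-in "oracle" are assumptions in place of REINVENT and the binding free energy simulations.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
pool = rng.normal(size=(1000, 16))   # candidate molecules as toy feature vectors

def oracle(X):
    # Stand-in for an expensive binding free energy simulation (toy function, an assumption).
    return X[:, 0] - X[:, 1] ** 2 + 0.1 * rng.normal(size=len(X))

labelled_idx = [int(i) for i in rng.choice(len(pool), size=20, replace=False)]
labels = {i: float(oracle(pool[[i]])[0]) for i in labelled_idx}

for _ in range(5):
    X_train = pool[labelled_idx]
    y_train = np.array([labels[i] for i in labelled_idx])
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

    # Acquisition: disagreement across the ensemble's trees (predictive uncertainty).
    per_tree = np.stack([tree.predict(pool) for tree in model.estimators_])
    uncertainty = per_tree.std(axis=0)
    uncertainty[labelled_idx] = -np.inf      # never re-select already-labelled points

    batch = np.argsort(uncertainty)[-10:]    # next batch to send to the expensive oracle
    for i in batch:
        labels[int(i)] = float(oracle(pool[[int(i)]])[0])
        labelled_idx.append(int(i))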
Normalization and effective learning rates in reinforcement learning
Normalization layers have recently experienced a renaissance in the deep reinforcement
learning and continual learning literature, with several works highlighting diverse benefits …
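The mechanism behind "effective learning rates" can be made concrete with a toy check (the layer and shapes below are assumptions): when a weight matrix is followed by LayerNorm, multiplying the weights by a constant leaves the output unchanged while dividing the gradient by that constant, so plain SGD effectively takes smaller relative steps as the weights grow.

import torch

torch.manual_seed(0)
x = torch.randn(32, 64)
target = torch.randn(32, 64)
W0 = torch.randn(64, 64)

def grad_norm(scale: float) -> float:
    # Linear map followed by LayerNorm: scaling the weights by `scale`
    # leaves the output unchanged, but the gradient shrinks by 1/scale.
    W = (scale * W0).requires_grad_()
    out = torch.nn.functional.layer_norm(x @ W.T, normalized_shape=(64,))
    loss = ((out - target) ** 2).mean()
    loss.backward()
    return W.grad.norm().item()

for s in (1.0, 2.0, 4.0):
    # Gradient norm roughly halves each time the weight scale doubles,
    # i.e. the effective learning rate of plain SGD decays as weights grow.
    print(f"weight scale {s}: grad norm {grad_norm(s):.4f}")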
Learning continually by spectral regularization
Loss of plasticity is a phenomenon where neural networks can become more difficult to train
over the course of learning. Continual learning algorithms seek to mitigate this effect by …
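A hedged sketch of what a spectral regularizer can look like: penalize each weight matrix's largest singular value for drifting away from 1 and add the penalty to the task loss (the quadratic penalty form and the 0.01 coefficient in the usage note are assumptions).

import torch

def spectral_penalty(model, target=1.0):
    # Sum over weight matrices of (sigma_max - target)^2, where sigma_max is
    # the largest singular value (the spectral norm) of the layer's weight.
    penalty = 0.0
    for p in model.parameters():
        if p.dim() == 2:                      # weight matrices of linear layers
            penalty = penalty + (torch.linalg.matrix_norm(p, ord=2) - target) ** 2
    return penalty

# Usage (model, criterion, inputs, labels, optimizer assumed):
#   loss = criterion(model(inputs), labels) + 0.01 * spectral_penalty(model)
#   loss.backward(); optimizer.step()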
In value-based deep reinforcement learning, a pruned network is a good network
Recent work has shown that deep reinforcement learning agents have difficulty in effectively
using their network parameters. We leverage prior insights into the advantages of sparse …
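A minimal sketch of producing such a sparse network by magnitude pruning with torch.nn.utils.prune; the one-shot 95% global pruning and the toy Q-network architecture are assumptions rather than the paper's schedule.

import torch
import torch.nn.utils.prune as prune

# Toy value network standing in for a DQN-style Q-network (architecture is an assumption).
q_net = torch.nn.Sequential(
    torch.nn.Linear(84, 512), torch.nn.ReLU(),
    torch.nn.Linear(512, 512), torch.nn.ReLU(),
    torch.nn.Linear(512, 6),
)

# Globally remove the 95% smallest-magnitude weights across all linear layers.
to_prune = [(m, "weight") for m in q_net.modules() if isinstance(m, torch.nn.Linear)]
prune.global_unstructured(to_prune, pruning_method=prune.L1Unstructured, amount=0.95)

total_zero = sum((m.weight == 0).sum().item() for m, _ in to_prune)
total = sum(m.weight.numel() for m, _ in to_prune)
print(f"overall sparsity: {total_zero / total:.2%}")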
Improving deep reinforcement learning by reducing the chain effect of value and policy churn
Deep neural networks provide Reinforcement Learning (RL) powerful function
approximators to address large-scale decision-making problems. However, these …
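The "churn" in this title refers to how much the value (or policy) network's outputs change on states outside the batch used for an update. A minimal diagnostic, with a toy Q-network and random batches standing in for real replay data:

import torch

torch.manual_seed(0)
q_net = torch.nn.Sequential(torch.nn.Linear(8, 64), torch.nn.ReLU(), torch.nn.Linear(64, 4))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

batch_states = torch.randn(32, 8)        # states used for the gradient update
reference_states = torch.randn(256, 8)   # held-out states used only to measure churn
td_targets = torch.randn(32, 4)          # stand-in for bootstrapped TD targets

with torch.no_grad():
    q_before = q_net(reference_states)

loss = ((q_net(batch_states) - td_targets) ** 2).mean()
opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    q_after = q_net(reference_states)

# Value churn: average change of Q-values on states outside the update batch.
print("value churn:", (q_after - q_before).abs().mean().item())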
Plastic Learning with Deep Fourier Features
Deep neural networks can struggle to learn continually in the face of non-stationarity. This
phenomenon is known as loss of plasticity. In this paper, we identify underlying principles …
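A hedged sketch of the kind of layer the title points to: each hidden layer outputs the sine and cosine of its pre-activations in place of a ReLU (the layer sizes and the plain concatenation here are assumptions).

import torch

class FourierLayer(torch.nn.Module):
    # Linear map whose output is the concatenation of sin and cos of the
    # pre-activations, used in place of a ReLU hidden layer.
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        assert out_dim % 2 == 0, "output is [sin, cos] pairs, so it must be even"
        self.linear = torch.nn.Linear(in_dim, out_dim // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.linear(x)
        return torch.cat([torch.sin(z), torch.cos(z)], dim=-1)

net = torch.nn.Sequential(FourierLayer(10, 256), FourierLayer(256, 256), torch.nn.Linear(256, 1))
print(net(torch.randn(4, 10)).shape)   # torch.Size([4, 1])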