Towards continual reinforcement learning: A review and perspectives

K Khetarpal, M Riemer, I Rish, D Precup - Journal of Artificial Intelligence …, 2022 - jair.org
In this article, we aim to provide a literature review of different formulations and approaches
to continual reinforcement learning (RL), also known as lifelong or non-stationary RL. We …

Transfer learning in demand response: A review of algorithms for data-efficient modelling and control

T Peirelinck, H Kazmi, BV Mbuwir, C Hermans… - Energy and AI, 2022 - Elsevier
A number of decarbonization scenarios for the energy sector are built on simultaneous
electrification of energy demand, and decarbonization of electricity generation through …

Multi-task learning with deep neural networks: A survey

M Crawshaw - arXiv preprint arXiv:2009.09796, 2020 - arxiv.org
Multi-task learning (MTL) is a subfield of machine learning in which multiple tasks are
simultaneously learned by a shared model. Such approaches offer advantages like …
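
The snippet stops at the motivation, but the "shared model" setting it refers to is most commonly realized as hard parameter sharing: one trunk network updated by every task, plus a small head per task. A minimal sketch of that idea in PyTorch, with hypothetical layer sizes, task count, and losses chosen only for illustration:

import torch
import torch.nn as nn
import torch.nn.functional as F

class HardSharingMTL(nn.Module):
    """Shared trunk with one output head per task (hard parameter sharing)."""
    def __init__(self, in_dim, hidden_dim, task_out_dims):
        super().__init__()
        # Trunk parameters receive gradients from every task.
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Each task keeps its own small head.
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, d) for d in task_out_dims]
        )

    def forward(self, x, task_id):
        return self.heads[task_id](self.trunk(x))

# Toy usage: two regression tasks, summed losses, one optimizer step.
model = HardSharingMTL(in_dim=16, hidden_dim=64, task_out_dims=[1, 3])
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 16)
targets = [torch.randn(8, 1), torch.randn(8, 3)]
loss = sum(F.mse_loss(model(x, t), y) for t, y in enumerate(targets))
opt.zero_grad()
loss.backward()
opt.step()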

A survey on multi-task learning

Y Zhang, Q Yang - IEEE Transactions on Knowledge and Data …, 2021 - ieeexplore.ieee.org
Multi-Task Learning (MTL) is a learning paradigm in machine learning whose aim is to
leverage useful information contained in multiple related tasks to help improve the …

Conservative data sharing for multi-task offline reinforcement learning

T Yu, A Kumar, Y Chebotar… - Advances in …, 2021 - proceedings.neurips.cc
Offline reinforcement learning (RL) algorithms have shown promising results in domains
where abundant pre-collected data is available. However, prior methods focus on solving …

Invariant causal prediction for block MDPs

A Zhang, C Lyle, S Sodhani, A Filos… - International …, 2020 - proceedings.mlr.press
Generalization across environments is critical to the successful application of reinforcement
learning (RL) algorithms to real-world challenges. In this work we propose a method for …

Deep reinforcement and infomax learning

B Mazoure, R Tachet des Combes… - Advances in …, 2020 - proceedings.neurips.cc
We posit that a reinforcement learning (RL) agent will perform better when it uses
representations that are better at predicting the future, particularly in terms of few-shot …
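
The snippet only states the premise, that representations which predict the future help the agent. One common way to operationalize such a predictive objective is a contrastive (InfoNCE-style) auxiliary loss that ties the current state's embedding to the embedding of a state a few steps ahead, using other transitions in the batch as negatives. This is a generic sketch of that family of objectives, not the paper's exact formulation; the encoder, predictor, shapes, and temperature are illustrative assumptions:

import torch
import torch.nn as nn
import torch.nn.functional as F

def temporal_infonce(encoder, predictor, obs_t, obs_tk, temperature=0.1):
    """InfoNCE with in-batch negatives: each z_t must identify its own
    future embedding z_{t+k} among the other futures in the batch."""
    z_t = predictor(encoder(obs_t))          # predicted future representation
    z_tk = encoder(obs_tk)                   # actual future representation
    z_t = F.normalize(z_t, dim=-1)
    z_tk = F.normalize(z_tk, dim=-1)
    logits = z_t @ z_tk.T / temperature      # (B, B) similarity matrix
    labels = torch.arange(obs_t.size(0))     # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

# Toy usage with hypothetical shapes: 32 transitions, 24-dim observations.
encoder = nn.Sequential(nn.Linear(24, 128), nn.ReLU(), nn.Linear(128, 64))
predictor = nn.Linear(64, 64)
obs_t, obs_tk = torch.randn(32, 24), torch.randn(32, 24)
aux_loss = temporal_infonce(encoder, predictor, obs_t, obs_tk)
# In practice this term is added to the RL objective, e.g. total = rl_loss + beta * aux_loss.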

PaCo: Parameter-compositional multi-task reinforcement learning

L Sun, H Zhang, W Xu… - Advances in Neural …, 2022 - proceedings.neurips.cc
The purpose of multi-task reinforcement learning (MTRL) is to train a single policy that can
be applied to a set of different tasks. Sharing parameters allows us to take advantage of the …
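
The snippet names only the parameter-sharing motivation. One way to make the compositional idea concrete is to form each task's policy parameters as a mixture of a small shared set of base parameters, roughly theta_task = Phi · w_task. The sketch below illustrates that idea for a single linear policy layer; it is not the authors' implementation, and the softmax mixing, layer sizes, and task count are assumptions made for the example:

import torch
import torch.nn as nn
import torch.nn.functional as F

class CompositionalLinearPolicy(nn.Module):
    """Each task's parameters are a learned mixture of K shared base parameter sets."""
    def __init__(self, obs_dim, act_dim, num_tasks, num_bases=5):
        super().__init__()
        # Phi: K shared base parameter sets for one linear layer.
        self.base_weights = nn.Parameter(torch.randn(num_bases, act_dim, obs_dim) * 0.1)
        self.base_biases = nn.Parameter(torch.zeros(num_bases, act_dim))
        # w_task: one small compositional vector per task.
        self.task_logits = nn.Parameter(torch.zeros(num_tasks, num_bases))

    def forward(self, obs, task_id):
        w = torch.softmax(self.task_logits[task_id], dim=-1)      # (K,) mixing weights
        weight = torch.einsum("k,kao->ao", w, self.base_weights)  # composed weight matrix
        bias = torch.einsum("k,ka->a", w, self.base_biases)       # composed bias
        return F.linear(obs, weight, bias)                        # action logits

# Toy usage: 10 tasks sharing 5 base parameter sets.
policy = CompositionalLinearPolicy(obs_dim=12, act_dim=4, num_tasks=10)
obs = torch.randn(6, 12)
logits = policy(obs, task_id=3)   # shape (6, 4)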

Provable benefits of representational transfer in reinforcement learning

A Agarwal, Y Song, W Sun, K Wang… - The Thirty Sixth …, 2023 - proceedings.mlr.press
We study the problem of representational transfer in RL, where an agent first pretrains in a
number of source tasks to discover a shared representation, which is subsequently …

Towards versatile embodied navigation

H Wang, W Liang, LV Gool… - Advances in neural …, 2022 - proceedings.neurips.cc
With the emergence of varied visual navigation tasks (e.g., image-/object-/audio-goal and
vision-language navigation) that specify the target in different ways, the community has …