Diffusion model is an effective planner and data synthesizer for multi-task reinforcement learning

H He, C Bai, K Xu, Z Yang, W Zhang… - Advances in neural …, 2023 - proceedings.neurips.cc
Diffusion models have demonstrated highly expressive generative capabilities in vision and
NLP. Recent studies in reinforcement learning (RL) have shown that diffusion models are …

Learning better with less: Effective augmentation for sample-efficient visual reinforcement learning

G Ma, L Zhang, H Wang, L Li, Z Wang… - Advances in …, 2023 - proceedings.neurips.cc
Data augmentation (DA) is a crucial technique for enhancing the sample efficiency of visual
reinforcement learning (RL) algorithms. Notably, employing simple observation …

Learning to manipulate anywhere: A visual generalizable framework for reinforcement learning

Z Yuan, T Wei, S Cheng, G Zhang, Y Chen… - arXiv preprint arXiv …, 2024 - arxiv.org
Can we endow visuomotor robots with generalization capabilities to operate in diverse open-
world scenarios? In this paper, we propose Maniwhere, a generalizable framework …

A deep reinforcement learning-based active suspension control algorithm considering deterministic experience tracing for autonomous vehicle

C Wang, X Cui, S Zhao, X Zhou, Y Song, Y Wang… - Applied Soft …, 2024 - Elsevier
As the challenges of autonomous driving grow more complex and variable, traditional
methods struggle to cope. As a result, artificial intelligence (AI) techniques have …

On pre-training for visuo-motor control: Revisiting a learning-from-scratch baseline

N Hansen, Z Yuan, Y Ze, T Mu, A Rajeswaran… - arXiv preprint arXiv …, 2022 - arxiv.org
In this paper, we examine the effectiveness of pre-training for visuo-motor control tasks. We
revisit a simple Learning-from-Scratch (LfS) baseline that incorporates data augmentation …

Revisiting plasticity in visual reinforcement learning: Data, modules and training stages

G Ma, L Li, S Zhang, Z Liu, Z Wang, Y Chen… - arXiv preprint arXiv …, 2023 - arxiv.org
Plasticity, the ability of a neural network to evolve with new data, is crucial for high-
performance and sample-efficient visual reinforcement learning (VRL). Although methods …

Normalization enhances generalization in visual reinforcement learning

L Li, J Lyu, G Ma, Z Wang, Z Yang, X Li, Z Li - arXiv preprint arXiv …, 2023 - arxiv.org
Recent advances in visual reinforcement learning (RL) have led to impressive success in
handling complex tasks. However, these methods have demonstrated limited generalization …

ESP: Exploiting symmetry prior for multi-agent reinforcement learning

X Yu, R Shi, P Feng, Y Tian, J Luo, W Wu - ECAI 2023, 2023 - ebooks.iospress.nl
Multi-agent reinforcement learning (MARL) has achieved promising results in recent years.
However, most existing reinforcement learning methods require a large amount of data for …

Research on deep reinforcement learning control algorithm for active suspension considering uncertain time delay

Y Wang, C Wang, S Zhao, K Guo - Sensors, 2023 - mdpi.com
The uncertain delay characteristic of actuators is a critical factor that affects the control
effectiveness of the active suspension system. Therefore, it is crucial to develop a control …

MA2CL: masked attentive contrastive learning for multi-agent reinforcement learning

H Song, M Feng, W Zhou, H Li - arXiv preprint arXiv:2306.02006, 2023 - arxiv.org
Recent approaches have utilized self-supervised auxiliary tasks as representation learning
to improve the performance and sample efficiency of vision-based reinforcement learning …