Mastering diverse domains through world models
D Hafner, J Pasukonis, J Ba, T Lillicrap - arXiv
Developing a general algorithm that learns to solve tasks across a wide range of
applications has been a fundamental challenge in artificial intelligence. Although current …
Bigger, better, faster: Human-level Atari with human-level efficiency
We introduce a value-based RL agent, which we call BBF, that achieves super-human
performance in the Atari 100K benchmark. BBF relies on scaling the neural networks used …
Gaia-1: A generative world model for autonomous driving
Autonomous driving promises transformative improvements to transportation, but building
systems capable of safely navigating the unstructured complexity of real-world scenarios …
Transformers learn shortcuts to automata
Algorithmic reasoning requires capabilities which are most naturally understood through
recurrent models of computation, like the Turing machine. However, Transformer models …
Masked world models for visual control
Visual model-based reinforcement learning (RL) has the potential to enable sample-efficient
robot learning from visual observations. Yet the current approaches typically train a single …
Temporal difference learning for model predictive control
Data-driven model predictive control has two key advantages over model-free methods: a
potential for improved sample efficiency through model learning, and better performance as …
On Transforming Reinforcement Learning With Transformers: The Development Trajectory
Transformers, originally devised for natural language processing (NLP), have also produced
significant successes in computer vision (CV). Due to their strong expression power …
Advances of machine learning in materials science: Ideas and techniques
In this big data era, the use of large dataset in conjunction with machine learning (ML) has
been increasingly popular in both industry and academia. In recent times, the field of …
Transformers are sample-efficient world models
Deep reinforcement learning agents are notoriously sample inefficient, which considerably
limits their application to real-world problems. Recently, many model-based methods have …
ManiGaussian: Dynamic Gaussian splatting for multi-task robotic manipulation
Performing language-conditioned robotic manipulation tasks in unstructured environments
is in high demand for general intelligent robots. Conventional robotic manipulation …