Deep reinforcement learning for robotics: A survey of real-world successes
Reinforcement learning (RL), particularly its combination with deep neural networks,
referred to as deep RL (DRL), has shown tremendous promise across a wide range of …
Real-world robot applications of foundation models: A review
Recent developments in foundation models, like Large Language Models (LLMs) and Vision-
Language Models (VLMs), trained on extensive data, facilitate flexible application across …
Foundation models in robotics: Applications, challenges, and the future
We survey applications of pretrained foundation models in robotics. Traditional deep
learning models in robotics are trained on small datasets tailored for specific tasks, which …
Octo: An open-source generalist robot policy
Large policies pretrained on diverse robot datasets have the potential to transform robotic
learning: instead of training new policies from scratch, such generalist robot policies may be …
Unified-IO 2: Scaling autoregressive multimodal models with vision, language, audio, and action
We present Unified-IO 2, a multimodal and multi-skill unified model capable of following
novel instructions. Unified-IO 2 can use text, images, audio, and/or videos as input and can …
DriveVLM: The convergence of autonomous driving and large vision-language models
A primary hurdle of autonomous driving in urban environments is understanding complex
and long-tail scenarios, such as challenging road conditions and delicate human behaviors …
Mobile ALOHA: Learning bimanual mobile manipulation with low-cost whole-body teleoperation
Imitation learning from human demonstrations has shown impressive performance in
robotics. However, most results focus on table-top manipulation, lacking the mobility and …
MOKA: Open-vocabulary robotic manipulation through mark-based visual prompting
Open-vocabulary generalization requires robotic systems to perform tasks involving complex
and diverse environments and task goals. While the recent advances in vision language …
Open-TeleVision: Teleoperation with immersive active visual feedback
Teleoperation serves as a powerful method for collecting on-robot data essential for robot
learning from demonstrations. The intuitiveness and ease of use of the teleoperation system …
LongVILA: Scaling long-context visual language models for long videos
Long-context capability is critical for multi-modal foundation models, especially for long
video understanding. We introduce LongVILA, a full-stack solution for long-context visual …