RT-2: Vision-language-action models transfer web knowledge to robotic control
A Brohan, N Brown, J Carbajal, Y Chebotar… - arXiv preprint arXiv…, 2023 - arxiv.org
We study how vision-language models trained on Internet-scale data can be incorporated
directly into end-to-end robotic control to boost generalization and enable emergent …
Open X-Embodiment: Robotic learning datasets and RT-X models
Large, high-capacity models trained on diverse datasets have shown remarkable success
in efficiently tackling downstream applications. In domains from NLP to Computer Vision …
DriveLM: Driving with graph visual question answering
We study how vision-language models (VLMs) trained on web-scale data can be integrated
into end-to-end driving systems to boost generalization and enable interactivity with human …
Foundation models in robotics: Applications, challenges, and the future
We survey applications of pretrained foundation models in robotics. Traditional deep
learning models in robotics are trained on small datasets tailored for specific tasks, which …
Where are we in the search for an artificial visual cortex for embodied intelligence?
We present the largest and most comprehensive empirical study of pre-trained visual
representations (PVRs) or visual 'foundation models' for Embodied AI. First, we curate …
Eureka: Human-level reward design via coding large language models
Large Language Models (LLMs) have excelled as high-level semantic planners for
sequential decision-making tasks. However, harnessing them to learn complex low-level …
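To make the Eureka abstract's idea concrete, here is a minimal sketch, assuming a cart-pole-style balancing task, of the kind of dense reward function a coding LLM might propose. This is not Eureka's actual output or API; the function name and the 0.1 weight are illustrative assumptions.

```python
import math

def balancing_reward(pole_angle: float, cart_velocity: float) -> float:
    """Hypothetical dense reward an LLM might write for a balancing task.

    Rewards keeping the pole upright and lightly penalizes cart speed;
    the 0.1 weight is an illustrative choice, the kind of parameter an
    Eureka-style search would revise across iterations.
    """
    upright = math.cos(pole_angle)            # 1.0 when the pole is vertical
    speed_penalty = 0.1 * cart_velocity ** 2  # discourage large velocities
    return upright - speed_penalty
```

In the paper's loop, many such candidate rewards are generated by the LLM, scored by training policies against them in simulation, and the best candidates are refined in the next round.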
LIV: Language-image representations and rewards for robotic control
We present Language-Image Value learning (LIV), a unified objective for vision-
language representation and reward learning from action-free videos with text annotations …
Large language models as general pattern machines
We observe that pre-trained large language models (LLMs) are capable of autoregressively
completing complex token sequences--from arbitrary ones procedurally generated by …
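As a rough, runnable illustration of the setup this abstract describes (not the paper's code), the sketch below builds a token sequence from a simple procedural rule and formats it as a completion prompt; `query_llm` is a hypothetical stand-in for whatever text-completion API is used.

```python
def procedural_sequence(length: int = 12) -> list[str]:
    """Generate a deterministic token sequence from a simple rule:
    repeat a 3-token motif, shifting every symbol up by one each cycle.
    Stands in for the procedurally generated sequences the abstract mentions."""
    motif = [2, 7, 4]
    return [str((motif[i % 3] + i // 3) % 10) for i in range(length)]

def make_prompt(tokens: list[str], n_shown: int = 9) -> str:
    """Few-shot completion format: show a prefix, let the model extrapolate."""
    return ", ".join(tokens[:n_shown]) + ","

if __name__ == "__main__":
    seq = procedural_sequence()
    print("prompt:  ", make_prompt(seq))          # 2, 7, 4, 3, 8, 5, 4, 9, 6,
    print("expected:", ", ".join(seq[9:]))        # 5, 0, 7
    # completion = query_llm(make_prompt(seq))    # hypothetical API call
```

The paper's observation is that a strong pre-trained LLM, given only such a prefix, often produces the correct continuation in-context, without any fine-tuning.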