Learning-based legged locomotion: State of the art and future perspectives

S Ha, J Lee, M van de Panne, Z Xie… - … Journal of Robotics …, 2024 - journals.sagepub.com
Legged locomotion holds the promise of universal mobility, a critical capability for many real-
world robotic applications. Both model-based and learning-based approaches have …

Enhancing autonomous system security and resilience with generative AI: A comprehensive survey

M Andreoni, WT Lunardi, G Lawton, S Thakkar - IEEE Access, 2024 - ieeexplore.ieee.org
This survey explores the transformative role of Generative Artificial Intelligence (GenAI) in
enhancing the trustworthiness, reliability, and security of autonomous systems such as …

Commonsense reasoning for legged robot adaptation with vision-language models

AS Chen, AM Lessing, A Tang, G Chada… - arXiv preprint arXiv …, 2024 - arxiv.org
Legged robots are physically capable of navigating a diverse variety of environments and
overcoming a wide range of obstructions. For example, in a search and rescue mission, a …

CurricuLLM: Automatic task curricula design for learning complex robot skills using large language models

K Ryu, Q Liao, Z Li, K Sreenath, N Mehr - arXiv preprint arXiv:2409.18382, 2024 - arxiv.org
Curriculum learning is a training mechanism in reinforcement learning (RL) that facilitates
the achievement of complex policies by progressively increasing the task difficulty during …

FIRE: A dataset for feedback integration and refinement evaluation of multimodal models

P Li, Z Gao, B Zhang, T Yuan, Y Wu, M Harandi… - arXiv preprint arXiv …, 2024 - arxiv.org
Vision language models (VLMs) have achieved impressive progress in diverse applications,
becoming a prevalent research direction. In this paper, we build FIRE, a feedback …

Autonomous interactive correction MLLM for robust robotic manipulation

C Xiong, C Shen, X Li, K Zhou, J Liu… - … Annual Conference on …, 2024 - openreview.net
The ability to reflect on and correct failures is crucial for robotic systems to interact stably with
real-life objects. Observing the generalization and reasoning capabilities of Multimodal …

Grounding robot policies with visuomotor language guidance

A Bucker, P Ortega-Kral, J Francis, J Oh - arXiv preprint arXiv:2410.06473, 2024 - arxiv.org
Recent advances in the fields of natural language processing and computer vision have
shown great potential in understanding the underlying dynamics of the world from large …

VLM Agents Generate Their Own Memories: Distilling Experience into Embodied Programs of Thought

GH Sarch, L Jang, MJ Tarr, WW Cohen… - The Thirty-eighth …, 2024 - openreview.net
Large-scale generative language and vision-language models (LLMs and VLMs) excel in
few-shot in-context learning for decision making and instruction following. However, they …

VLM agents generate their own memories: Distilling experience into embodied programs

G Sarch, L Jang, MJ Tarr, WW Cohen, K Marino… - arXiv preprint arXiv …, 2024 - arxiv.org
Large-scale generative language and vision-language models excel in in-context learning
for decision making. However, they require high-quality exemplar demonstrations to be …

Generative AI agents in autonomous machines: A safety perspective

J Jabbour, VJ Reddi - arXiv preprint arXiv:2410.15489, 2024 - arxiv.org
The integration of Generative Artificial Intelligence (AI) into autonomous machines
represents a major paradigm shift in how these systems operate and unlocks new solutions …