

Poster

An Embodied Generalist Agent in 3D World

Jiangyong Huang · Silong Yong · Xiaojian Ma · Xiongkun Linghu · Puhao Li · Yan Wang · Qing Li · Song-Chun Zhu · Baoxiong Jia · Siyuan Huang


Abstract:

Leveraging massive knowledge and learning schemes from large language models (LLMs), recent machine learning models have shown notable success in building generalist agents capable of general-purpose task solving across diverse domains, including natural language processing, computer vision, and robotics. However, several significant challenges remain: (i) most of these models rely on 2D images and exhibit a limited ability to process 3D input; (ii) tasks that are intuitively situated in the 3D world, e.g., 3D grounding, embodied reasoning, and embodied acting, are rarely explored. We argue that these limitations significantly hinder current models from performing real-world tasks and, ultimately, from achieving general intelligence. To this end, we introduce an embodied multi-modal and multi-task generalist agent that excels in perceiving, grounding, reasoning, planning, and acting in the 3D world. Our proposed agent, referred to as LEO, is trained with shared LLM-based model architectures, objectives, and weights in two stages: (i) 3D vision-language alignment and (ii) 3D vision-language-action instruction tuning. To facilitate training, we meticulously curate and generate an extensive dataset of object-level and scene-level multi-modal tasks of substantial scale and complexity, which necessitate a deep understanding of, and interaction with, the 3D world. Through rigorous experiments, we demonstrate LEO's remarkable proficiency across a wide spectrum of tasks, including 3D captioning, question answering, embodied reasoning, embodied navigation, and robotic manipulation. Our ablation results further provide valuable insights for developing future embodied generalist agents.
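The two-stage recipe described above (vision-language alignment, then vision-language-action instruction tuning over the same shared weights and objective) can be sketched schematically. This is a minimal illustrative sketch, not the authors' implementation: the class and function names (`GeneralistAgent`, `train_step`, `train`) are assumptions, and the "weights" are a toy stand-in for an actual LLM's parameters.

```python
# Illustrative sketch of a LEO-style two-stage training loop.
# All names here are hypothetical; the real agent is an LLM-based model
# trained with an autoregressive objective over multi-modal token sequences.

class GeneralistAgent:
    """Stand-in for an LLM-based agent whose weights are shared across stages."""
    def __init__(self):
        # Toy "weights": we just count training examples seen per stage.
        self.weights = {"alignment_steps": 0, "instruction_steps": 0}

    def train_step(self, batch, stage):
        # Both stages optimize the same objective over the same weights;
        # only the data mixture differs (captions vs. instruction/action data).
        key = "alignment_steps" if stage == "alignment" else "instruction_steps"
        self.weights[key] += len(batch)


def train(agent, alignment_data, instruction_data):
    # Stage (i): 3D vision-language alignment
    # (e.g., object-level and scene-level captioning data).
    for batch in alignment_data:
        agent.train_step(batch, stage="alignment")
    # Stage (ii): 3D vision-language-action instruction tuning
    # (QA, reasoning, planning, navigation, manipulation),
    # continuing from the stage-(i) weights rather than reinitializing.
    for batch in instruction_data:
        agent.train_step(batch, stage="instruction")
    return agent
```

The key design point the sketch captures is that the second stage resumes from the first stage's weights with the same architecture and objective, so capabilities learned during alignment carry over into instruction tuning.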
