

Poster

RoboCodeX: Multimodal Code Generation for Robotic Behavior Synthesis

Yao Mu · Junting Chen · Qing-Long Zhang · Shoufa Chen · Qiaojun Yu · Chongjian GE · Runjian Chen · Zhixuan Liang · Mengkang Hu · Chaofan Tao · Peize Sun · Haibao Yu · Chao Yang · Wenqi Shao · Wenhai Wang · Jifeng Dai · Yu Qiao · Mingyu Ding · Ping Luo


Abstract:

Robotic behavior synthesis, the problem of understanding multimodal inputs and generating precise physical control for robots, is an important part of Embodied AI. Despite successes in applying multimodal large language models for high-level understanding, it remains challenging to translate these conceptual understandings into detailed robotic actions while generalizing across diverse scenarios. In this paper, we propose a tree-structured multimodal code generation framework for generalized robotic behavior synthesis, termed RoboCodeX. RoboCodeX decomposes high-level human instructions into multiple object-centric manipulation units, each comprising physical preferences such as affordances and safety constraints, and applies code generation to introduce generalization across various robotic platforms. To further enhance the ability to map conceptual and perceptual understanding into control commands, a specialized multimodal reasoning dataset is collected for pre-training, and an iterative self-updating methodology is introduced for supervised fine-tuning. Extensive experiments demonstrate that RoboCodeX achieves state-of-the-art performance in both simulators and on real robots across four kinds of manipulation tasks and one embodied navigation task.
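
To make the abstract's pipeline concrete, the sketch below illustrates the general idea of decomposing a high-level instruction into object-centric manipulation units that carry physical preferences (affordances, safety notes) and then rendering those units as executable robot code. This is a minimal illustrative sketch only: all names here (ManipulationUnit, PhysicalPreference, decompose_instruction, emit_code, the robot API) are hypothetical and are not taken from the RoboCodeX codebase or paper.

```python
# Hypothetical sketch of a tree-structured decomposition into object-centric
# manipulation units, followed by code generation. Not the authors' implementation.
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class PhysicalPreference:
    """Hypothetical container for per-object physical constraints."""
    affordance: str          # e.g. "handle", "rim", "flat surface"
    safety_note: str = ""    # e.g. "keep upright to avoid spilling"


@dataclass
class ManipulationUnit:
    """One object-centric node in the task tree."""
    target_object: str
    action: str
    preference: PhysicalPreference
    children: list[ManipulationUnit] = field(default_factory=list)


def decompose_instruction(instruction: str) -> ManipulationUnit:
    """Toy stand-in for the multimodal model: maps an instruction to a task tree."""
    # In the real system this decomposition would come from the fine-tuned
    # multimodal model; here it is hard-coded purely to show the data flow.
    return ManipulationUnit(
        target_object="drawer",
        action="open",
        preference=PhysicalPreference(affordance="handle"),
        children=[
            ManipulationUnit(
                target_object="cup",
                action="pick_and_place",
                preference=PhysicalPreference(
                    affordance="rim grasp",
                    safety_note="keep upright to avoid spilling",
                ),
            )
        ],
    )


def emit_code(unit: ManipulationUnit) -> str:
    """Render the task tree as calls to a hypothetical platform-agnostic robot API."""
    lines = [
        f"# {unit.action} {unit.target_object} "
        f"(affordance: {unit.preference.affordance})",
        f"robot.{unit.action}(target='{unit.target_object}')",
    ]
    for child in unit.children:
        lines.append(emit_code(child))
    return "\n".join(lines)


if __name__ == "__main__":
    tree = decompose_instruction("Put the cup into the drawer")
    print(emit_code(tree))
```

The design point the sketch tries to capture is that the generated code, rather than the model itself, is what interfaces with a particular robot platform, which is how a tree of object-centric units can transfer across different embodiments.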
