

Poster

Using Left and Right Brains Together: Towards Vision and Language Planning

Jun CEN · Chenfei Wu · Xiao Liu · Shengming Yin · Yixuan Pei · Jinglong Yang · Qifeng Chen · Nan Duan · Jianguo Zhang


Abstract:

Large Language Models (LLMs) and Large Multi-modality Models (LMMs) have demonstrated remarkable decision-making capabilities on a variety of tasks. However, they inherently plan within the language space, lacking visual and spatial imagination abilities. In contrast, humans use both the left and right hemispheres of the brain for language and visual planning during the thinking process. Therefore, in this work we introduce a novel vision-language planning framework that performs concurrent visual and language planning for tasks with inputs of any form. Our framework incorporates visual planning to capture intricate environmental details, while language planning enhances the logical coherence of the overall system. We evaluate the effectiveness of our framework across vision-language tasks, vision-only tasks, and language-only tasks. The results demonstrate the superior performance of our approach, indicating that the integration of visual and language planning yields more contextually aware task execution.
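To make the idea of concurrent visual and language planning concrete, here is a minimal Python sketch of the two-branch structure described in the abstract. All names (PlanStep, language_plan, visual_plan, vision_language_plan) and the string-based "imagined frames" are hypothetical placeholders, not the paper's actual interfaces or models; the sketch only illustrates decomposing a task into textual sub-goals (the "left brain" branch) while imagining corresponding future visual states (the "right brain" branch) and fusing them step by step.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class PlanStep:
    language: str  # textual sub-goal ("left brain" / language planning)
    visual: str    # imagined visual state for that step ("right brain" / visual planning)


def language_plan(task: str, horizon: int) -> List[str]:
    """Placeholder language-planning branch: decompose the task into textual sub-goals."""
    return [f"step {t}: sub-goal toward '{task}'" for t in range(horizon)]


def visual_plan(task: str, horizon: int) -> List[str]:
    """Placeholder visual-planning branch: imagine a future visual state per step."""
    return [f"<imagined frame {t} for '{task}'>" for t in range(horizon)]


def vision_language_plan(task: str, horizon: int = 3) -> List[PlanStep]:
    """Run both branches over the same horizon and pair their outputs step by step."""
    lang_steps = language_plan(task, horizon)
    vis_steps = visual_plan(task, horizon)
    return [PlanStep(language=l, visual=v) for l, v in zip(lang_steps, vis_steps)]


if __name__ == "__main__":
    for step in vision_language_plan("stack the red block on the blue block"):
        print(step)
```

In the actual framework, the two branches would presumably be realized by learned language and visual generation models rather than string templates; the sketch is only meant to show how the two planning streams run in parallel and are combined per step.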
