VLM-driven Behavior Tree for Context-aware Task Planning

Authors: Naoki Wake, Atsushi Kanehira, Jun Takamatsu, Kazuhiro Sasabuchi, Katsushi Ikeuchi

Abstract: The use of Large Language Models (LLMs) for generating Behavior Trees (BTs)
has recently gained attention in the robotics community, yet remains in its
early stages of development. In this paper, we propose a novel framework that
leverages Vision-Language Models (VLMs) to interactively generate and edit BTs
that address visual conditions, enabling context-aware robot operations in
visually complex environments. A key feature of our approach is conditional
control through self-prompted visual conditions. Specifically, the
VLM generates BTs with visual condition nodes, where conditions are expressed
as free-form text. Another VLM process integrates the text into its prompt and
evaluates the conditions against real-world images during robot execution. We
validated our framework in a real-world cafe scenario, demonstrating both its
feasibility and limitations.
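
To make the idea concrete, below is a minimal sketch of a behavior tree containing a visual condition node whose condition is free-form text evaluated by a VLM, in the spirit of the paper's self-prompted visual conditions. This is not the authors' implementation: the `query_vlm` stub, the node classes, and the cafe-style condition text are all illustrative assumptions standing in for a real VLM API and robot skills.

```python
# Sketch of a behavior tree with a VLM-evaluated visual condition node.
# query_vlm() is a hypothetical stand-in for a real VLM request.

from dataclasses import dataclass
from typing import Callable, List


def query_vlm(condition_text: str, image: bytes) -> bool:
    """Hypothetical VLM call: embed the free-form condition text in a
    prompt and ask whether the condition holds in the image."""
    prompt = (
        "Answer yes or no. Does the following condition hold in the "
        f"image? Condition: {condition_text}"
    )
    _ = prompt, image  # replace with a real VLM request
    return False


@dataclass
class VisualCondition:
    """Leaf node whose condition is free-form text judged by a VLM."""
    condition_text: str

    def tick(self, image: bytes) -> bool:
        return query_vlm(self.condition_text, image)


@dataclass
class Action:
    """Leaf node wrapping a robot skill; returns True on success."""
    name: str
    run: Callable[[], bool]

    def tick(self, image: bytes) -> bool:
        return self.run()


@dataclass
class Sequence:
    """Ticks children in order; fails as soon as one child fails."""
    children: List[object]

    def tick(self, image: bytes) -> bool:
        return all(child.tick(image) for child in self.children)


# Example tree for a cafe-style task: wipe the table only if the VLM
# confirms it is clear of cups (condition text is illustrative).
tree = Sequence([
    VisualCondition("the table is clear of cups"),
    Action("wipe_table", run=lambda: True),
])

camera_frame = b""  # placeholder for a real camera image
print(tree.tick(camera_frame))
```

Because the condition is plain text, the same prompt template can evaluate arbitrary conditions the planning VLM writes into the tree, which is what enables the interactive generation and editing the paper describes.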

Source: http://arxiv.org/abs/2501.03968v1
