Authors: Xiao Liu, Tianjie Zhang, Yu Gu, Iat Long Iong, Yifan Xu, Xixuan Song, Shudan Zhang, Hanyu Lai, Xinyi Liu, Hanlin Zhao, Jiadai Sun, Xinyue Yang, Yu Yang, Zehan Qi, Shuntian Yao, Xueqiao Sun, Siyi Cheng, Qinkai Zheng, Hao Yu, Hanchen Zhang, Wenyi Hong, Ming Ding, Lihang Pan, Xiaotao Gu, Aohan Zeng, Zhengxiao Du, Chan Hee Song, Yu Su, Yuxiao Dong, Jie Tang
Abstract: Large Multimodal Models (LMMs) have ushered in a new era in artificial
intelligence, merging language and vision capabilities to form highly
capable Visual Foundation Agents. These agents are postulated to excel across a
myriad of tasks, potentially approaching artificial general intelligence.
However, existing benchmarks fail to sufficiently challenge or showcase the
full potential of LMMs in complex, real-world environments. To address this
gap, we introduce VisualAgentBench (VAB), a comprehensive and pioneering
benchmark specifically designed to train and evaluate LMMs as visual foundation
agents across diverse scenarios, including Embodied, Graphical User Interface,
and Visual Design, with tasks formulated to probe the depth of LMMs’
understanding and interaction capabilities. Through rigorous testing across
nine proprietary LMM APIs and eight open models, we demonstrate the
considerable yet still developing agent capabilities of these models.
Additionally, VAB provides a trajectory training set constructed through
hybrid methods, including Program-based Solvers, LMM Agent Bootstrapping, and
Human Demonstrations, enabling substantial performance improvements in LMMs
through behavior cloning. Our work not only aims to benchmark existing models
but also provides a solid foundation for developing LMMs into visual
foundation agents. Code, training \& test data, and part of the fine-tuned open
LMMs are available at \url{https://github.com/THUDM/VisualAgentBench}.
Source: http://arxiv.org/abs/2408.06327v1
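The abstract trains agents by behavior cloning on collected trajectories. As a minimal sketch of that objective only (not the paper's LMM fine-tuning pipeline), the toy example below trains a small policy network to imitate demonstrated actions with a cross-entropy loss; the dimensions, network, and randomly generated data are placeholders, not from VAB.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical sizes for illustration only.
OBS_DIM, ACTION_VOCAB = 128, 32

# Toy policy: maps an observation embedding to a distribution over a
# discrete action vocabulary.
policy = nn.Sequential(
    nn.Linear(OBS_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, ACTION_VOCAB),
)

# Fake (observation, action) pairs standing in for expert trajectories.
observations = torch.randn(1024, OBS_DIM)
actions = torch.randint(0, ACTION_VOCAB, (1024,))
loader = DataLoader(TensorDataset(observations, actions),
                    batch_size=64, shuffle=True)

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Behavior cloning: maximize the likelihood of the demonstrated action
# given the observation (i.e., supervised learning on trajectories).
for epoch in range(3):
    for obs, act in loader:
        logits = policy(obs)
        loss = loss_fn(logits, act)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

In the paper's setting the "policy" is an LMM and the actions are token sequences, but the training signal is the same supervised imitation loss over recorded trajectories.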