Authors: Chaoyou Fu, Haojia Lin, Zuwei Long, Yunhang Shen, Meng Zhao, Yifan Zhang, Xiong Wang, Di Yin, Long Ma, Xiawu Zheng, Ran He, Rongrong Ji, Yunsheng Wu, Caifeng Shan, Xing Sun
Abstract: The remarkable multimodal capabilities and interactive experience of GPT-4o
underscore the necessity of both in practical applications, yet open-source models
rarely excel in these two areas. In this paper, we introduce VITA, the first-ever
open-source Multimodal Large Language Model (MLLM) adept at simultaneous
processing and analysis of Video, Image, Text, and Audio modalities, while also
offering an advanced multimodal interactive experience. Starting from
Mixtral 8x7B as a language foundation, we expand its vocabulary with Chinese
tokens, followed by bilingual instruction tuning. We further endow the language model
with visual and audio capabilities through two-stage multi-task learning of
multimodal alignment and instruction tuning. VITA demonstrates robust
foundational capabilities in multilingual, vision, and audio understanding, as
evidenced by its strong performance across a range of both unimodal and
multimodal benchmarks. Beyond foundational capabilities, we have made
considerable progress in enhancing the natural multimodal human-computer
interaction experience. To the best of our knowledge, we are the first to
exploit non-awakening interaction and audio interruption in an MLLM. VITA is the
first step for the open-source community to explore the seamless integration of
multimodal understanding and interaction. While much work remains for VITA to
approach its closed-source counterparts, we hope that its
role as a pioneer can serve as a cornerstone for subsequent research. Project
Page: https://vita-home.github.io.
Source: http://arxiv.org/abs/2408.05211v1
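To make the vocabulary-expansion step mentioned in the abstract more concrete, the following is a minimal, hedged sketch of how a base LLM's tokenizer can be extended with additional Chinese tokens and its embedding matrix resized before bilingual instruction tuning. It uses standard Hugging Face `transformers` calls; the model name is the Mixtral 8x7B foundation named in the paper, but the token list is purely a placeholder and this is not VITA's actual training code.

```python
# Illustrative sketch (not VITA's released code): extend the tokenizer with
# new Chinese tokens and resize the model's embedding tables accordingly.
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = "mistralai/Mixtral-8x7B-v0.1"  # language foundation used by VITA
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Hypothetical new vocabulary entries; a real expansion would be derived
# from tokenizing a large Chinese corpus.
new_tokens = ["你好", "世界"]
num_added = tokenizer.add_tokens(new_tokens)

# Grow the input/output embedding tables so the new token ids are valid;
# the newly added rows are then learned during bilingual instruction tuning.
if num_added > 0:
    model.resize_token_embeddings(len(tokenizer))
```

The new embedding rows start from the library's default initialization, so the subsequent bilingual instruction tuning described in the abstract is what actually gives them useful representations.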