EVEv2: Improved Baselines for Encoder-Free Vision-Language Models

Authors: Haiwen Diao, Xiaotong Li, Yufeng Cui, Yueze Wang, Haoge Deng, Ting Pan, Wenxuan Wang, Huchuan Lu, Xinlong Wang

Abstract: Existing encoder-free vision-language models (VLMs) are rapidly narrowing the
performance gap with their encoder-based counterparts, highlighting the
promising potential for unified multimodal systems with structural simplicity
and efficient deployment. We systematically clarify the performance gap between
VLMs using pre-trained vision encoders, discrete tokenizers, and minimalist
visual layers from scratch, deeply excavating the under-examined
characteristics of encoder-free VLMs. We develop efficient strategies for
encoder-free VLMs that rival mainstream encoder-based ones. After an in-depth
investigation, we launch EVEv2.0, a new and improved family of encoder-free
VLMs. We show that: (i) Properly decomposing and hierarchically associating
vision and language within a unified model reduces interference between
modalities. (ii) A well-designed training strategy enables effective
optimization for encoder-free VLMs. Through extensive evaluation, EVEv2.0
offers a thorough study of developing a decoder-only architecture across
modalities, demonstrating superior data efficiency and strong vision-reasoning
capability. Code is publicly available at: https://github.com/baaivision/EVE.

Source: http://arxiv.org/abs/2502.06788v1
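Claim (i) above can be sketched in miniature: a decoder layer that shares the sequence across modalities but routes vision and text tokens through separate feed-forward weights, so updates for one modality do not overwrite parameters serving the other. Everything below (names, shapes, the ReLU feed-forward) is an illustrative assumption for exposition, not the authors' actual EVEv2.0 implementation.

```python
import numpy as np

# Minimal sketch (hypothetical) of modality-decomposed parameters inside
# one decoder layer: vision and text tokens each get their own FFN weights.
rng = np.random.default_rng(0)
d_model, d_ff = 8, 16

def make_ffn():
    # One small two-layer feed-forward block's weights.
    return {"w1": rng.standard_normal((d_model, d_ff)) * 0.1,
            "w2": rng.standard_normal((d_ff, d_model)) * 0.1}

# Separate parameter sets per modality ("divide-and-conquer").
ffn = {"vision": make_ffn(), "text": make_ffn()}

def modality_ffn(x, modality_mask):
    """Route each token through the FFN of its own modality.

    x: (seq, d_model) token states; modality_mask: (seq,) of 'vision'/'text'.
    Attention (not shown) would still mix all tokens; only the per-modality
    FFN weights are decomposed, reducing cross-modal parameter interference.
    """
    out = np.empty_like(x)
    for m in ("vision", "text"):
        idx = modality_mask == m
        h = np.maximum(x[idx] @ ffn[m]["w1"], 0.0)  # ReLU hidden layer
        out[idx] = x[idx] + h @ ffn[m]["w2"]        # residual connection
    return out

tokens = rng.standard_normal((5, d_model))
mask = np.array(["vision", "vision", "text", "text", "text"])
y = modality_ffn(tokens, mask)
print(y.shape)  # (5, 8)
```

The same routing idea extends to per-modality normalization and attention projections; the trade-off is extra parameters in exchange for less interference when both modalities are trained in one decoder.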
