TextHawk2: A Large Vision-Language Model Excels in Bilingual OCR and Grounding with 16x Fewer Tokens

Authors: Ya-Qi Yu, Minghui Liao, Jiwen Zhang, Jihao Wu

Abstract: Reading dense text and locating objects within images are fundamental
abilities for Large Vision-Language Models (LVLMs) handling advanced tasks.
Previous LVLMs, including leading proprietary models such as GPT-4o, have
struggled to excel at both simultaneously. Moreover, previous LVLMs with
fine-grained perception cost thousands of tokens per image, making them
resource-intensive. We present TextHawk2, a bilingual LVLM featuring efficient
fine-grained perception and demonstrating cutting-edge performance across
general-purpose, OCR, and grounding tasks with 16 times fewer image tokens.
Critical improvements include: (1) Token Compression: Building on the efficient
architecture of its predecessor, TextHawk2 significantly reduces the number of
tokens per image by 16 times, facilitating training and deployment of the
TextHawk series with minimal resources. (2) Visual Encoder Reinforcement: We
enhance the visual encoder through LVLM co-training, unlocking its potential
for previously unseen tasks like Chinese OCR and grounding. (3) Data Diversity:
We maintain a comparable scale of 100 million samples while diversifying the
sources of pre-training data. We assess TextHawk2 across multiple benchmarks,
where it consistently delivers superior performance and outperforms
closed-source models of similar scale, for example achieving 78.4% accuracy on
OCRBench, 81.4% accuracy on ChartQA, 89.6% ANLS on DocVQA, and 88.1%
accuracy@0.5 on RefCOCOg-test.

Source: http://arxiv.org/abs/2410.05261v1
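The abstract does not describe the compression mechanism itself, so the sketch below is only a rough illustration of how a 16x reduction in visual tokens could be realized in principle: merging each 4x4 neighborhood of encoder tokens into a single token. All names here (TokenCompressor, window, proj) are hypothetical and are not taken from the paper.

```python
import torch
import torch.nn as nn


class TokenCompressor(nn.Module):
    """Illustrative 16x visual-token compressor (hypothetical, not the paper's module).

    Merges every 4x4 window of visual tokens into one token, shrinking an
    H x W token grid to (H/4) x (W/4), i.e. 16 times fewer tokens.
    """

    def __init__(self, dim: int, window: int = 4):
        super().__init__()
        self.window = window
        # Project the concatenated features of a 4x4 = 16 token window back to `dim`.
        self.proj = nn.Linear(dim * window * window, dim)

    def forward(self, tokens: torch.Tensor, grid_h: int, grid_w: int) -> torch.Tensor:
        # tokens: (batch, grid_h * grid_w, dim) from the visual encoder
        b, n, d = tokens.shape
        w = self.window
        x = tokens.view(b, grid_h, grid_w, d)
        # Group tokens into non-overlapping w x w windows and flatten each window.
        x = x.view(b, grid_h // w, w, grid_w // w, w, d)
        x = x.permute(0, 1, 3, 2, 4, 5).reshape(
            b, (grid_h // w) * (grid_w // w), w * w * d
        )
        return self.proj(x)  # (batch, n / 16, dim)


# Example: a 32x32 grid of 1024 visual tokens becomes 64 tokens.
compressor = TokenCompressor(dim=768)
visual_tokens = torch.randn(1, 32 * 32, 768)
compressed = compressor(visual_tokens, grid_h=32, grid_w=32)
print(compressed.shape)  # torch.Size([1, 64, 768])
```

The design choice illustrated here is simply that compression happens before the tokens reach the language model, which is why the reported savings apply to both training and deployment; the paper's actual architecture should be consulted for the real mechanism.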
