Inference Optimal VLMs Need Only One Visual Token but Larger Models

Authors: Kevin Y. Li, Sachin Goyal, Joao D. Semedo, J. Zico Kolter

Abstract: Vision Language Models (VLMs) have demonstrated strong capabilities across
various visual understanding and reasoning tasks. However, their real-world
deployment is often constrained by high latency during inference due to
substantial compute required by the LLM to process the large number of input
tokens (predominantly from the image). To reduce inference costs, one can
either downsize the LLM or reduce the number of input image tokens; the latter
has been the focus of many recent works on token compression.
However, it is unclear what the optimal trade-off is, as both factors
directly affect VLM performance. We first characterize this optimal
trade-off between the number of visual tokens and LLM parameters by
establishing scaling laws that capture variations in performance with these two
factors. Our results reveal a surprising trend: for visual reasoning tasks, the
inference-optimal behavior in VLMs, i.e., minimum downstream error at any given
fixed inference compute, is achieved when using the largest LLM that fits
within the inference budget while minimizing visual token count – often to a
single token. While the token reduction literature has mainly focused on
maintaining base model performance by modestly reducing the token count (e.g.,
$5-10\times$), our results indicate that the compute-optimal inference regime
requires operating under even higher token compression ratios. Based on these
insights, we take some initial steps towards building approaches tailored for
high token compression settings. Code is available at
https://github.com/locuslab/llava-token-compression.
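
As a rough illustration of the trade-off the scaling laws formalize (a minimal sketch, not the paper's fitted law): per-query prefill compute in the LLM is commonly approximated as roughly 2 x parameters x input tokens, so a much larger LLM can cost a comparable amount if the visual token count is cut aggressively. The helper name, token counts, and model sizes below are hypothetical.

```python
# Minimal sketch (illustrative assumptions, not the paper's method):
# compare per-query prefill compute for different (LLM size, visual token
# count) pairs using the common approximation FLOPs ~= 2 * params * tokens.

def prefill_flops(llm_params: float, visual_tokens: int, text_tokens: int = 64) -> float:
    """Approximate prefill FLOPs for one query: ~2 * params * total input tokens."""
    return 2.0 * llm_params * (visual_tokens + text_tokens)

# Hypothetical configurations: a small LLM with a full visual token grid
# versus larger LLMs with heavily compressed visual tokens.
configs = [
    (0.5e9, 576),  # 0.5B LLM, uncompressed 576-token visual grid
    (7e9, 16),     # 7B LLM, ~36x visual token compression
    (13e9, 1),     # 13B LLM, a single visual token
]

for params, v_tokens in configs:
    print(f"params={params / 1e9:5.1f}B  visual_tokens={v_tokens:4d}  "
          f"prefill_flops={prefill_flops(params, v_tokens):.2e}")
```

Under this crude cost model, the 13B-parameter configuration with a single visual token lands within a small constant factor of the 0.5B configuration that keeps all 576 visual tokens, which is the regime where the paper's results favor spending the budget on the larger LLM.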

Source: http://arxiv.org/abs/2411.03312v1
