Authors: Jiarui Zhang, Mahyar Khayatkhoei, Prateek Chhikara, Filip Ilievski
Abstract: Multimodal Large Language Models (MLLMs) have experienced rapid progress in
visual recognition tasks in recent years. Given their potential integration
into many critical applications, it is important to understand the limitations
of their visual perception. In this work, we study whether MLLMs can perceive
small visual details as effectively as large ones when answering questions
about images. We observe that their performance is very sensitive to the size
of the visual subject of the question, and further show that this effect is in
fact causal by conducting an intervention study. Next, we study the attention
patterns of MLLMs when answering visual questions, and intriguingly find that
they consistently know where to look, even when they provide the wrong answer.
Based on these findings, we then propose training-free visual intervention
methods that leverage the internal knowledge of any MLLM itself, in the form of
attention and gradient maps, to enhance its perception of small visual details.
We evaluate our proposed methods on two widely used MLLMs and seven visual
question answering benchmarks and show that they can significantly improve
MLLMs’ accuracy without requiring any training. Our results elucidate the risk
of applying MLLMs to visual recognition tasks concerning small details and
indicate that visual intervention using the model’s internal state is a
promising direction to mitigate this risk.
Source: http://arxiv.org/abs/2502.17422v1
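
Illustrative sketch (not the paper's implementation): the abstract describes training-free visual interventions that use the model's own attention or gradient maps to help it perceive small details. One simple instantiation of that idea, assumed here purely for illustration, is to crop and upsample the most-attended image region and query the model again with the zoomed view. The function below, its parameters (`crop_frac`), and the shape of the attention map are all assumptions, not the authors' method.

```python
# Hypothetical attention-guided cropping: given a patch-level attention map
# produced by an MLLM for a question, zoom into the most-attended region so
# small visual details occupy more of the model's input resolution.
import numpy as np
from PIL import Image


def attention_guided_crop(image: Image.Image,
                          attn: np.ndarray,
                          crop_frac: float = 0.4) -> Image.Image:
    """Crop `image` around the peak of `attn` (shape [n_rows, n_cols]) and
    resize the crop back to the original resolution. `crop_frac` is an
    illustrative parameter controlling the crop size."""
    n_rows, n_cols = attn.shape
    w, h = image.size

    # Center the crop on the most-attended patch, mapped back to pixel space.
    r, c = np.unravel_index(np.argmax(attn), attn.shape)
    cy = (r + 0.5) * h / n_rows
    cx = (c + 0.5) * w / n_cols

    # Clamp the crop window to the image bounds.
    cw, ch = crop_frac * w, crop_frac * h
    left = int(np.clip(cx - cw / 2, 0, w - cw))
    top = int(np.clip(cy - ch / 2, 0, h - ch))
    crop = image.crop((left, top, int(left + cw), int(top + ch)))

    # Upsample so the small region is seen at higher effective resolution.
    return crop.resize((w, h), Image.BICUBIC)


# Usage (hypothetical): `attn` would be derived from the MLLM's own attention
# or gradient maps over image patches for the question tokens; the model is
# then queried again with the original image plus the zoomed crop.
# zoomed = attention_guided_crop(Image.open("example.jpg"), attn)
```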