Authors: Jiaming Li, Jiacheng Zhang, Zequn Jie, Lin Ma, Guanbin Li
Abstract: Large vision-language models (LVLMs) have shown remarkable capabilities in
vision-language understanding for downstream multi-modal tasks. Despite their
success, LVLMs remain prone to hallucinations in complex generation tasks,
producing inconsistencies between visual inputs and generated content. To
address this issue, some approaches have introduced
inference-time interventions, such as contrastive decoding and attention
rectification, to reduce overreliance on language priors. However, these
approaches overlook hallucinations stemming from spurious inter-modality
correlations. In this paper, we propose an Inter-Modality Correlation
Calibration Decoding (IMCCD) method to mitigate hallucinations in LVLMs in a
training-free manner. In this method, we design a Cross-Modal Value-Enhanced
Decoding (CMVED) module that alleviates hallucination via a novel contrastive
decoding mechanism. During the estimation of the distorted distribution, CMVED
masks the value vectors associated with significant cross-modal attention
weights, which addresses both uni-modality overreliance and misleading
inter-modality correlations. Additionally, a Content-Driven Attention
Refinement (CDAR) module refines cross-modal attention weights, guiding LVLMs to
focus on important visual content. Experimental results on diverse
hallucination benchmarks validate the superiority of our method over existing
state-of-the-art techniques in reducing hallucinations in LVLM text generation.
Our code will be available at https://github.com/lijm48/IMCCD.
Source: http://arxiv.org/abs/2501.01926v1
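
As a rough illustration of the decoding mechanism described in the abstract, below is a minimal, single-head PyTorch sketch of the two ingredients it names: masking the value vectors tied to the strongest cross-modal attention weights to form the distorted distribution, and a standard contrastive combination of the full and distorted logits. The function and parameter names (cross_modal_value_masked_attention, mask_ratio, alpha) and the top-k selection heuristic are assumptions for illustration, not the authors' implementation; see the linked repository for the actual IMCCD code.

```python
import torch

def cross_modal_value_masked_attention(q, k, v, visual_idx, mask_ratio=0.5):
    """Single-head attention in which the value vectors of the most-attended
    visual tokens are zeroed, producing the 'distorted' branch of contrastive
    decoding (a sketch of the CMVED idea; mask_ratio is a hypothetical knob).

    q: (T, d) queries; k, v: (S, d) keys/values;
    visual_idx: LongTensor of sequence positions holding visual tokens.
    """
    d = q.size(-1)
    attn = torch.softmax(q @ k.transpose(-2, -1) / d ** 0.5, dim=-1)  # (T, S)
    # Total cross-modal attention mass each visual token receives.
    vis_scores = attn[:, visual_idx].sum(dim=0)                       # (|V|,)
    n_mask = max(1, int(mask_ratio * vis_scores.numel()))
    top = torch.topk(vis_scores, n_mask).indices
    v_masked = v.clone()
    v_masked[visual_idx[top]] = 0.0  # drop values behind strong cross-modal links
    return attn @ v_masked           # (T, d) distorted attention output

def contrastive_logits(logits_full, logits_distorted, alpha=1.0):
    """Generic contrastive-decoding step: boost the full model's logits and
    subtract the distorted branch so tokens driven by spurious correlations
    lose probability mass."""
    return (1 + alpha) * logits_full - alpha * logits_distorted
```

Note that the distortion here acts on value vectors inside attention rather than on the visual input itself, which is the property the abstract credits with handling spurious inter-modality correlations in addition to uni-modality overreliance; CDAR's attention reweighting is not sketched.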