Authors: Yuan Sun, Zhao Zhang, Jorge Ortiz
Abstract: In current multimodal tasks, models typically freeze the encoder and decoder
while adapting intermediate layers to task-specific goals such as region
captioning. Region-level visual understanding remains a significant challenge
for large-scale vision-language models: beyond their well-known limited spatial
awareness, coarse-grained pretraining makes it especially difficult to optimize
latent representations for effective encoder-decoder alignment. We propose
AlignCap, a framework designed to enhance region-level understanding through
fine-grained alignment of latent spaces. Our approach introduces a novel latent
feature refinement module that improves conditioned latent-space representations
for region-level captioning, together with a semantic space alignment module
that raises the quality of multimodal representations. Within both modules, we
further incorporate contrastive learning to strengthen region-level captioning.
To address spatial limitations, we employ a General Object Detection (GOD)
method as a data preprocessing pipeline that improves spatial reasoning at the
region level. Extensive experiments demonstrate that our approach significantly
improves region-level captioning performance across various tasks.
Source: http://arxiv.org/abs/2412.10348v1
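
To illustrate the kind of contrastive alignment between region-level visual features and caption embeddings that the abstract describes, the sketch below computes a symmetric InfoNCE-style loss over paired region and text embeddings. This is a minimal illustration only; the function name, tensor shapes, and temperature value are assumptions for exposition and do not reflect AlignCap's actual modules or training objective.

```python
# Illustrative sketch: a generic InfoNCE-style contrastive loss for aligning
# pooled region visual features with caption text features.
# Shapes, names, and the temperature are assumptions, not the paper's implementation.
import torch
import torch.nn.functional as F


def contrastive_alignment_loss(region_feats: torch.Tensor,
                               text_feats: torch.Tensor,
                               temperature: float = 0.07) -> torch.Tensor:
    """Symmetric contrastive loss over a batch of paired (region, caption) embeddings.

    region_feats: (B, D) pooled visual features for B regions.
    text_feats:   (B, D) text features for the B matching region captions.
    """
    # L2-normalize so the dot product acts as a cosine similarity.
    region_feats = F.normalize(region_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)

    # (B, B) similarity matrix; diagonal entries are the positive pairs.
    logits = region_feats @ text_feats.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Average the region-to-text and text-to-region cross-entropy terms.
    loss_r2t = F.cross_entropy(logits, targets)
    loss_t2r = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_r2t + loss_t2r)


if __name__ == "__main__":
    B, D = 8, 512  # toy batch of 8 region-caption pairs with 512-d embeddings
    loss = contrastive_alignment_loss(torch.randn(B, D), torch.randn(B, D))
    print(loss.item())
```

In this generic setup, each region's caption serves as its positive and the other captions in the batch serve as negatives, which is one common way contrastive learning is used to pull matched visual and textual representations together in a shared latent space.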