A Simple Aerial Detection Baseline of Multimodal Language Models

Authors: Qingyun Li, Yushi Chen, Xinya Shu, Dong Chen, Xin He, Yi Yu, Xue Yang

Abstract: Multimodal language models (MLMs) based on the generative pre-trained
Transformer are considered powerful candidates for unifying various domains and
tasks. MLMs developed for remote sensing (RS) have demonstrated outstanding
performance in multiple tasks, such as visual question answering and visual
grounding. In addition to visual grounding, which detects the specific objects
corresponding to a given instruction, aerial detection, which detects all objects
of multiple categories, is also a valuable and challenging task for RS
foundation models. However, aerial detection has not been explored by existing
RS MLMs because the autoregressive prediction mechanism of MLMs differs
significantly from the structured outputs that detection requires. In this paper, we present a simple
baseline for applying MLMs to aerial detection for the first time, named
LMMRotate. Specifically, we first introduce a normalization method to transform
detection outputs into textual outputs to be compatible with the MLM framework.
Then, we propose an evaluation method that ensures a fair comparison between
MLMs and conventional object detection models. We construct the baseline by
fine-tuning open-source general-purpose MLMs and achieve impressive detection
performance comparable to that of conventional detectors. We hope that this baseline
will serve as a reference for future MLM development, enabling more
comprehensive capabilities for understanding RS images. Code is available at
https://github.com/Li-Qingyun/mllm-mmrotate.

Source: http://arxiv.org/abs/2501.09720v1
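
The "normalization method" mentioned in the abstract serializes detection outputs as plain text so an autoregressive MLM can emit them token by token. Below is a minimal sketch of that idea in Python; the function name, polygon corner ordering, separators, and 1000-bin quantization are illustrative assumptions, not the exact format used by LMMRotate:

import math

def boxes_to_text(boxes, labels, class_names, img_w, img_h, n_bins=1000):
    """Serialize rotated-box detections as a single prediction string."""

    def quantize(value, size):
        # Map a pixel coordinate into an integer bin in [0, n_bins - 1],
        # so every coordinate becomes a short, vocabulary-friendly token.
        return max(0, min(n_bins - 1, round(value / size * (n_bins - 1))))

    pieces = []
    for (cx, cy, w, h, angle), label in zip(boxes, labels):
        # Expand (cx, cy, w, h, angle) into the 4 corner points of the
        # rotated rectangle, then quantize each coordinate.
        dx, dy = w / 2.0, h / 2.0
        cos_a, sin_a = math.cos(angle), math.sin(angle)
        coords = []
        for sx, sy in ((-dx, -dy), (dx, -dy), (dx, dy), (-dx, dy)):
            px = cx + sx * cos_a - sy * sin_a
            py = cy + sx * sin_a + sy * cos_a
            coords += [quantize(px, img_w), quantize(py, img_h)]
        pieces.append(class_names[label] + " " + " ".join(map(str, coords)))
    return "; ".join(pieces)

# Example: one 'plane' box centred at (400, 300), 80x40 px, rotated 30 degrees.
print(boxes_to_text([(400, 300, 80, 40, math.pi / 6)], [0],
                    ["plane"], img_w=1024, img_h=1024))

At evaluation time, such strings would be parsed back into class labels and polygons before computing detection metrics against the ground truth, which is where a protocol for fairly comparing score-free MLM outputs with conventional detectors becomes necessary.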
