Authors: Sijia Chen, En Yu, Wenbing Tao
Abstract: Referring Multi-Object Tracking (RMOT) is an important topic in the current
tracking field. Its task is to guide the tracker to follow the objects that
match a given language description. Current research mainly focuses on referring
multi-object tracking in a single view, i.e., a single view sequence or
multiple unrelated view sequences. However, in a single view, the appearance of
an object can easily become invisible, resulting in incorrect matching of
objects with the language description. In this work, we propose a new task
called Cross-view Referring Multi-Object Tracking (CRMOT). It introduces
cross-view information to obtain object appearances from multiple views,
avoiding the invisible-appearance problem of the RMOT task. CRMOT is a more
challenging task that requires accurately tracking the objects matching the
language description while maintaining identity consistency across views. To
advance the CRMOT task, we construct a cross-view referring multi-object
tracking benchmark, named CRTrack, based on the CAMPUS and DIVOTrack datasets.
Specifically, it provides 13 different scenes and 221 language descriptions.
Furthermore, we propose an end-to-end cross-view referring
multi-object tracking method, named CRTracker. Extensive experiments on the
CRTrack benchmark verify the effectiveness of our method. The dataset and code
are available at https://github.com/chen-si-jia/CRMOT.
Source: http://arxiv.org/abs/2412.17807v1