SAT: Spatial Aptitude Training for Multimodal Language Models

Authors: Arijit Ray, Jiafei Duan, Reuben Tan, Dina Bashkirova, Rose Hendrix, Kiana Ehsani, Aniruddha Kembhavi, Bryan A. Plummer, Ranjay Krishna, Kuo-Hao Zeng, Kate Saenko

Abstract: Spatial perception is a fundamental component of intelligence. While many
studies highlight that large multimodal language models (MLMs) struggle to
reason about space, they only test for static spatial reasoning, such as
categorizing the relative positions of objects. Meanwhile, real-world
deployment requires dynamic capabilities like perspective-taking and egocentric
action recognition. As a roadmap to improving spatial intelligence, we
introduce SAT, Spatial Aptitude Training, which goes beyond static relative
object position questions to more dynamic tasks. SAT contains 218K
question-answer pairs for 22K synthetic scenes, split into training and testing
sets. Generated using a photo-realistic physics engine, our dataset can be
arbitrarily scaled and easily extended to new actions, scenes, and 3D assets.
We find that even MLMs that perform relatively well on static questions
struggle to accurately answer dynamic spatial questions. Further, we show that
SAT instruction-tuning data improves not only dynamic spatial reasoning on SAT,
but also zero-shot performance on existing real-image spatial benchmarks:
23% on CVBench, 8% on the harder BLINK benchmark, and 18% on VSR. When
instruction-tuned on SAT, our 13B model matches larger proprietary MLMs like
GPT4-V and Gemini-3-1.0 in spatial reasoning. Our data/code is available at
http://arijitray1993.github.io/SAT/ .
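
To make the "dynamic" question types concrete, here is a minimal sketch of what a multiple-choice record for perspective-taking or egocentric-motion questions could look like, together with a simple accuracy tally. The field names, example questions, and file names below are hypothetical illustrations and are not the actual SAT data schema; see the project page above for the released format.

```python
# Hypothetical sketch of a dynamic spatial QA record and a multiple-choice
# accuracy check. Field names and examples are illustrative only, not the
# real SAT schema.
from dataclasses import dataclass
from typing import List


@dataclass
class SpatialQA:
    image_paths: List[str]  # one or more rendered frames of a synthetic scene
    question: str           # e.g., a perspective-taking or egocentric-motion question
    choices: List[str]      # multiple-choice options
    answer: str             # the correct option


# Illustrative examples of the dynamic question types described in the abstract.
examples = [
    SpatialQA(
        image_paths=["scene_001_frame0.png"],
        question=("If you were standing at the sofa facing the TV, "
                  "would the lamp be on your left or right?"),
        choices=["left", "right"],
        answer="left",
    ),
    SpatialQA(
        image_paths=["scene_002_frame0.png", "scene_002_frame1.png"],
        question="Between these two frames, did the camera move forward or rotate to the right?",
        choices=["moved forward", "rotated right"],
        answer="rotated right",
    ),
]


def accuracy(predictions: List[str], dataset: List[SpatialQA]) -> float:
    """Fraction of multiple-choice questions answered correctly."""
    correct = sum(pred == qa.answer for pred, qa in zip(predictions, dataset))
    return correct / len(dataset)


if __name__ == "__main__":
    # Stand-in predictions from a hypothetical model; a real evaluation would
    # prompt an MLM with the image(s), question, and answer choices.
    preds = ["left", "moved forward"]
    print(f"accuracy: {accuracy(preds, examples):.2f}")
```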

Source: http://arxiv.org/abs/2412.07755v1
