Authors: Vahid Balazadeh, Mohammadmehdi Ataei, Hyunmin Cheong, Amir Hosein Khasahmadi, Rahul G. Krishnan
Abstract: Physical reasoning, which involves the interpretation, understanding, and
prediction of object behavior in dynamic environments, remains a significant
challenge for current Vision-Language Models (VLMs). In this work, we propose
two methods to enhance VLMs’ physical reasoning capabilities using simulated
data. First, we fine-tune a pre-trained VLM using question-answer (QA) pairs
generated from simulations relevant to physical reasoning tasks. Second, we
introduce Physics Context Builders (PCBs), specialized VLMs fine-tuned to
create scene descriptions enriched with physical properties and processes.
During physical reasoning tasks, these PCBs can be leveraged to provide
context that helps a Large Language Model (LLM) improve its performance. We
evaluate both approaches on multiple benchmarks, including CLEVRER and a new
stability-detection QA dataset called Falling Tower, which contains both
simulated and real-world scenes. We demonstrate that a small QA fine-tuned VLM
can significantly outperform larger state-of-the-art foundational models. We
also show that integrating PCBs boosts the performance of foundational LLMs on
physical reasoning tasks. Using the real-world scenes from the Falling Tower
dataset, we further validate the robustness of both approaches under Sim2Real
transfer. Our results highlight the utility of simulated data for building
learning systems capable of advanced physical reasoning.
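
As an illustration of the first approach, the sketch below generates stability
QA pairs from a toy physics model. It is a minimal stand-in, not the authors'
pipeline: the support-based stability test, the symbolic scene encoding, and
all names (Box, is_stable, make_example) are assumptions for exposition. In
the paper, each scene would be rendered to an image by a physics simulator
before fine-tuning the VLM.

    # Toy generator for stability QA pairs (hypothetical stand-in for the
    # paper's simulation pipeline). Scenes stay symbolic here; a real
    # pipeline would render each scene to an image for VLM fine-tuning.
    import json
    import random
    from dataclasses import dataclass

    @dataclass
    class Box:
        x: float  # horizontal center of the box
        w: float  # box width

    def is_stable(stack: list) -> bool:
        # Classic block-stacking criterion: for every box, the combined
        # center of mass of all boxes above it must lie over its footprint.
        for i in range(len(stack) - 1):
            above = stack[i + 1:]
            com = sum(b.x for b in above) / len(above)
            if abs(com - stack[i].x) > stack[i].w / 2:
                return False
        return True

    def make_example(rng: random.Random) -> dict:
        stack = [Box(x=rng.uniform(-0.6, 0.6), w=1.0)
                 for _ in range(rng.randint(2, 5))]
        return {
            "scene": [{"x": round(b.x, 3), "w": b.w} for b in stack],
            "question": "Will this stack of boxes remain standing?",
            "answer": "yes" if is_stable(stack) else "no",
        }

    rng = random.Random(0)
    with open("sim_qa_pairs.jsonl", "w") as f:
        for _ in range(1000):
            f.write(json.dumps(make_example(rng)) + "\n")

A VLM would then be fine-tuned on the rendered image and question, with the
simulator's label as the target answer.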
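
The second approach, using PCBs as context, amounts to a two-stage prompt
assembly: a fine-tuned VLM first emits a physics-rich scene description, which
is then prepended to the question before querying the LLM. The sketch below
shows only that assembly; pcb_describe and llm_complete are hypothetical
placeholders for the actual model calls, and the canned strings are purely
illustrative.

    # Hypothetical two-stage inference with a Physics Context Builder (PCB).
    def pcb_describe(image_path: str) -> str:
        # Placeholder for the fine-tuned PCB (a VLM) that turns an image
        # into a scene description enriched with physical properties.
        return ("A tower of four blocks; the top block overhangs its support "
                "by more than half its width, so the combined center of mass "
                "of the upper blocks lies outside the supporting surface.")

    def llm_complete(prompt: str) -> str:
        # Placeholder for a foundational LLM call; swap in a real client.
        return "No, the tower is likely to topple."

    def answer_with_pcb(question: str, image_path: str) -> str:
        context = pcb_describe(image_path)
        prompt = ("Scene description (from a Physics Context Builder):\n"
                  f"{context}\n\n"
                  f"Question: {question}\nAnswer:")
        return llm_complete(prompt)

    print(answer_with_pcb("Will the tower remain standing?", "tower.png"))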
Source: http://arxiv.org/abs/2412.08619v1