Beyond Sight: Finetuning Generalist Robot Policies with Heterogeneous Sensors via Language Grounding

Authors: Joshua Jones, Oier Mees, Carmelo Sferrazza, Kyle Stachowicz, Pieter Abbeel, Sergey Levine

Abstract: Interacting with the world is a multi-sensory experience: achieving effective
general-purpose interaction requires making use of all available modalities —
including vision, touch, and audio — to fill in gaps from partial observation.
For example, when vision is occluded while reaching into a bag, a robot should rely
on its senses of touch and sound. However, state-of-the-art generalist robot
policies are typically trained on large datasets to predict robot actions
solely from visual and proprioceptive observations. In this work, we propose
FuSe, a novel approach that enables finetuning visuomotor generalist policies
on heterogeneous sensor modalities for which large datasets are not readily
available, by leveraging natural language as a common cross-modal grounding. We
combine a multimodal contrastive loss with a sensory-grounded language
generation loss to encode high-level semantics. In the context of robot
manipulation, we show that FuSe enables performing challenging tasks that
require reasoning jointly over modalities such as vision, touch, and sound in a
zero-shot setting, including multimodal prompting, compositional cross-modal
prompting, and describing the objects it interacts with. We show that the same
recipe is applicable to widely different generalist policies, including both
diffusion-based generalist policies and large vision-language-action (VLA)
models. Extensive experiments in the real world show that FuSe is able to
increase success rates by over 20% compared to all considered baselines.
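
To make the loss combination above concrete, here is a minimal sketch of how a multimodal contrastive loss and a sensory-grounded language-generation loss might be summed as auxiliary objectives during finetuning. It is an illustration only, not the paper's implementation: the exact architecture, loss weighting, and tokenization are not given in the abstract, and every name here (fuse_auxiliary_losses, vision_emb, touch_emb, audio_emb, text_emb, lm_logits, caption_tokens) is hypothetical.

    import torch
    import torch.nn.functional as F

    def fuse_auxiliary_losses(vision_emb, touch_emb, audio_emb, text_emb,
                              lm_logits, caption_tokens, temperature=0.07):
        """Hypothetical FuSe-style auxiliary objectives:
        (1) a contrastive loss aligning each sensor embedding with a language
            embedding of the instruction, and
        (2) a language-generation loss over sensory-grounded description tokens.
        """
        # (1) CLIP-style symmetric InfoNCE between each modality and language.
        contrastive = 0.0
        text = F.normalize(text_emb, dim=-1)
        for sensor_emb in (vision_emb, touch_emb, audio_emb):
            sensor = F.normalize(sensor_emb, dim=-1)
            logits = sensor @ text.t() / temperature           # (B, B) similarities
            targets = torch.arange(logits.size(0), device=logits.device)
            contrastive = contrastive + 0.5 * (
                F.cross_entropy(logits, targets)               # sensor -> text
                + F.cross_entropy(logits.t(), targets)         # text -> sensor
            )

        # (2) Next-token cross-entropy for generating descriptions grounded in
        #     the non-visual sensor observations.
        generation = F.cross_entropy(
            lm_logits.reshape(-1, lm_logits.size(-1)),         # (B*T, vocab)
            caption_tokens.reshape(-1),                        # (B*T,)
            ignore_index=-100,                                 # ignore padding
        )

        # A weighted sum of these terms would be added to the policy's
        # action-prediction loss; equal weights are assumed here for brevity.
        return contrastive + generation

In practice, the relative weights of the contrastive, generation, and action-prediction terms would be tuned; the equal weighting above is only a placeholder.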

Source: http://arxiv.org/abs/2501.04693v1
