DynaMo: In-Domain Dynamics Pretraining for Visuo-Motor Control

Authors: Zichen Jeff Cui, Hengkai Pan, Aadhithya Iyer, Siddhant Haldar, Lerrel Pinto

Abstract: Imitation learning has proven to be a powerful tool for training complex
visuomotor policies. However, current methods often require hundreds to
thousands of expert demonstrations to handle high-dimensional visual
observations. A key reason for this poor data efficiency is that visual
representations are predominantly either pretrained on out-of-domain data or
trained directly through a behavior cloning objective. In this work, we present
DynaMo, a new in-domain, self-supervised method for learning visual
representations. Given a set of expert demonstrations, we jointly learn a
latent inverse dynamics model and a forward dynamics model over a sequence of
image embeddings, predicting the next frame in latent space, without
augmentations, contrastive sampling, or access to ground truth actions.
Importantly, DynaMo does not require any out-of-domain data such as Internet
datasets or cross-embodied datasets. On a suite of six simulated and real
environments, we show that representations learned with DynaMo significantly
improve downstream imitation learning performance over prior self-supervised
learning objectives and pretrained representations. Gains from using DynaMo
hold across policy classes such as Behavior Transformer, Diffusion Policy, MLP,
and nearest neighbors. Finally, we ablate over key components of DynaMo and
measure its impact on downstream policy performance. Robot videos are best
viewed at https://dynamo-ssl.github.io

Source: http://arxiv.org/abs/2409.12192v1
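
To make the pretraining objective described in the abstract more concrete, here is a minimal PyTorch sketch of a joint latent inverse/forward dynamics loss of the kind DynaMo uses: an encoder embeds consecutive frames, an inverse dynamics head infers a latent action from the pair (no ground-truth actions), and a forward dynamics head predicts the next embedding from the current embedding and that latent action. The module names, network sizes, latent action dimension, and the stop-gradient on the prediction target are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a DynaMo-style joint inverse/forward dynamics objective.
# Architectures, dimensions, and the detach on the target are assumptions
# made for illustration; see the paper and code release for the real method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynaMoSketch(nn.Module):
    def __init__(self, obs_dim=3 * 64 * 64, emb_dim=128, latent_action_dim=16):
        super().__init__()
        # Visual encoder: maps raw observations o_t to latent embeddings z_t.
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(obs_dim, emb_dim))
        # Inverse dynamics: infers a latent "action" from consecutive embeddings,
        # since ground-truth actions are not available during pretraining.
        self.inverse_dynamics = nn.Sequential(
            nn.Linear(2 * emb_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_action_dim),
        )
        # Forward dynamics: predicts the next embedding from (z_t, latent action).
        self.forward_dynamics = nn.Sequential(
            nn.Linear(emb_dim + latent_action_dim, 128), nn.ReLU(),
            nn.Linear(128, emb_dim),
        )

    def loss(self, obs_t, obs_next):
        z_t = self.encoder(obs_t)
        z_next = self.encoder(obs_next)
        # Latent action inferred purely from the observation pair.
        a_latent = self.inverse_dynamics(torch.cat([z_t, z_next], dim=-1))
        z_next_pred = self.forward_dynamics(torch.cat([z_t, a_latent], dim=-1))
        # Next-frame prediction in latent space; detaching the target is a
        # simple (assumed) guard against trivial representation collapse.
        return F.mse_loss(z_next_pred, z_next.detach())

# Usage: pretrain the encoder on demonstration frames, then reuse it for
# downstream imitation learning (e.g. with a Behavior Transformer or Diffusion Policy).
model = DynaMoSketch()
obs_t = torch.randn(8, 3, 64, 64)
obs_next = torch.randn(8, 3, 64, 64)
loss = model.loss(obs_t, obs_next)
loss.backward()
```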
