Authors: Carmelo Sferrazza, Dun-Ming Huang, Fangchen Liu, Jongmin Lee, Pieter Abbeel
Abstract: In recent years, the transformer architecture has become the de facto
standard for machine learning algorithms applied to natural language processing
and computer vision. Despite notable evidence of successful deployment of this
architecture in the context of robot learning, we claim that vanilla
transformers do not fully exploit the structure of the robot learning problem.
Therefore, we propose Body Transformer (BoT), an architecture that leverages
the robot embodiment by providing an inductive bias that guides the learning
process. We represent the robot body as a graph of sensors and actuators, and
rely on masked attention to pool information throughout the architecture. The
resulting architecture outperforms the vanilla transformer, as well as the
classical multilayer perceptron, in terms of task completion, scaling
properties, and computational efficiency when representing either imitation or
reinforcement learning policies. Additional material, including the open-source
code, is available at https://sferrazza.cc/bot_site.
Source: http://arxiv.org/abs/2408.06316v1
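The abstract's core mechanism is masked attention shaped by the robot's body graph: each sensor/actuator token attends only to itself and its graph neighbors. The following is a minimal sketch of that idea, not the released implementation; the toy 4-node graph, single attention head, and omission of learned projections are illustrative assumptions.

```python
import torch

# Hypothetical 4-node body graph (e.g., a torso connected to three limbs);
# edges follow the kinematic structure, as described in the abstract.
num_nodes = 4
edges = [(0, 1), (0, 2), (0, 3)]

# Boolean attention mask: each node attends to itself and its neighbors,
# so information pools along the body structure rather than globally.
mask = torch.eye(num_nodes, dtype=torch.bool)
for i, j in edges:
    mask[i, j] = mask[j, i] = True

# Masked scaled dot-product attention over per-node feature tokens
# (single head, no learned projections, for brevity).
d = 8
tokens = torch.randn(num_nodes, d)  # one token per sensor/actuator node
scores = (tokens @ tokens.T) / d**0.5
scores = scores.masked_fill(~mask, float("-inf"))
out = torch.softmax(scores, dim=-1) @ tokens
print(out.shape)  # torch.Size([4, 8])
```

Stacking several such masked layers lets information propagate to progressively more distant parts of the body graph, which is one plausible reading of "pool information throughout the architecture"; see the linked site for the authors' actual code.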