A Paragraph is All It Takes: Rich Robot Behaviors from Interacting, Trusted LLMs

Authors: OpenMind, Shaohong Zhong, Adam Zhou, Boyuan Chen, Homin Luo, Jan Liphardt

Abstract: Large Language Models (LLMs) are compact representations of all public knowledge of our physical environment and of animal and human behaviors. The
application of LLMs to robotics may offer a path to highly capable robots that
perform well across most human tasks with limited or even zero tuning. Aside
from increasingly sophisticated reasoning and task planning, networks of (suitably designed) LLMs make capabilities easy to upgrade and allow humans to directly observe the robot’s thinking. Here we explore the advantages,
limitations, and particularities of using LLMs to control physical robots. The
basic system consists of four LLMs communicating via a human-language data bus implemented with WebSockets and ROS2 message passing. Surprisingly, rich robot behaviors and good performance across different tasks could be achieved despite the robot’s data fusion cycle running at only 1 Hz and the central data bus running at the extremely limited rate of the human brain, around 40 bits/s.
The use of natural language for inter-LLM communication allowed the robot’s
reasoning and decision-making to be directly observed by humans and made it
trivial to bias the system’s behavior with sets of rules written in plain
English. These rules were immutably written into Ethereum, a global, public,
and censorship-resistant Turing-complete computer. We suggest that by using
natural language as the data bus among interacting AIs, and immutable public
ledgers to store behavior constraints, it is possible to build robots that
combine unexpectedly rich performance, upgradability, and durable alignment
with humans.

Source: http://arxiv.org/abs/2412.18588v1
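
To make the data-bus idea concrete, the following is a minimal sketch of one node on such a human-language bus, assuming ROS2's Python client (rclpy). The node name, topic name, and example message are illustrative placeholders, not details from the paper.

```python
# Minimal sketch: one node on a natural-language data bus, assuming ROS2 (rclpy).
# Node name, topic name, and message text are illustrative placeholders.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String


class LanguageBusNode(Node):
    """Publishes one plain-English status message per second (~1 Hz fusion cycle)."""

    def __init__(self):
        super().__init__('language_bus_node')
        self.publisher = self.create_publisher(String, 'language_bus', 10)
        # A 1.0 s timer period matches the 1 Hz data fusion cycle in the abstract.
        self.timer = self.create_timer(1.0, self.tick)

    def tick(self):
        msg = String()
        # In the full system this text would come from an LLM; hard-coded here.
        msg.data = 'I see a person ahead; slowing down and greeting them.'
        self.publisher.publish(msg)
        # Because the bus carries plain English, humans can read the log directly.
        self.get_logger().info(msg.data)


def main():
    rclpy.init()
    node = LanguageBusNode()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```

Other LLM nodes would subscribe to the same topic, so every exchange on the bus doubles as a human-readable transcript of the robot's reasoning.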
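
Similarly, here is a hedged sketch of how a robot might fetch its plain-English behavior rules from Ethereum at startup, assuming the web3.py library. The RPC endpoint, contract address, ABI, and the `rules()` view function are all hypothetical; the paper does not specify its contract interface.

```python
# Hypothetical sketch: read an immutable, plain-English rule set from Ethereum
# using web3.py. Endpoint, address, and the rules() function are placeholders.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider('https://eth.example.org'))  # placeholder RPC endpoint

RULES_ADDRESS = '0x0000000000000000000000000000000000000000'  # placeholder address
RULES_ABI = [{
    'name': 'rules',
    'type': 'function',
    'stateMutability': 'view',
    'inputs': [],
    'outputs': [{'name': '', 'type': 'string'}],
}]

contract = w3.eth.contract(address=RULES_ADDRESS, abi=RULES_ABI)

# One read at boot; because the contract state is immutable, the rules cannot
# be silently changed after deployment.
rules_text = contract.functions.rules().call()

# The rules are plain English, so they can be prepended directly to the
# system prompt of each LLM in the network.
print(rules_text)
```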
