Authors: Jun Wang, Ying Yuan, Haichuan Che, Haozhi Qi, Yi Ma, Jitendra Malik, Xiaolong Wang
Abstract: In-hand manipulation of pen-like objects is an important skill in our daily
lives, as many tools such as hammers and screwdrivers are similarly shaped.
However, current learning-based methods struggle with this task due to a lack
of high-quality demonstrations and the significant gap between simulation and
the real world. In this work, we push the boundaries of learning-based in-hand
manipulation systems by demonstrating the capability to spin pen-like objects.
We first use reinforcement learning to train an oracle policy with privileged
information and generate a high-fidelity trajectory dataset in simulation. This
serves two purposes: 1) pre-training a sensorimotor policy in simulation; 2)
conducting open-loop trajectory replay in the real world. We then fine-tune the
sensorimotor policy using these real-world trajectories to adapt it to the
real-world dynamics. With fewer than 50 trajectories, our policy learns to rotate
more than ten pen-like objects with different physical properties for multiple
revolutions. We present a comprehensive analysis of our design choices and
share the lessons learned during development.
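The two-stage recipe in the abstract — pre-train a sensorimotor policy on simulated oracle trajectories, then fine-tune it on a small set of real-world trajectories — can be illustrated with a minimal behavior-cloning sketch. This is an assumption-laden toy, not the paper's implementation: a linear policy and plain gradient descent stand in for the actual network and optimizer, and the "dynamics gap" is modeled as a small perturbation of the sim data-generating map.

```python
import numpy as np

rng = np.random.default_rng(0)

def bc_train(W, obs, acts, lr=0.1, steps=200):
    """Behavior cloning: gradient descent on mean squared error ||obs @ W - acts||^2."""
    n = len(obs)
    for _ in range(steps):
        grad = (2.0 / n) * obs.T @ (obs @ W - acts)
        W = W - lr * grad
    return W

def mse(W, obs, acts):
    return float(np.mean((obs @ W - acts) ** 2))

obs_dim, act_dim = 8, 4

# Stage 1: large simulated dataset distilled from an oracle policy
# (hypothetical linear "oracle" map from observations to actions).
W_sim = rng.normal(size=(obs_dim, act_dim))
sim_obs = rng.normal(size=(1000, obs_dim))
sim_acts = sim_obs @ W_sim
W = bc_train(np.zeros((obs_dim, act_dim)), sim_obs, sim_acts)

# Stage 2: fewer than 50 real-world trajectories; real dynamics differ
# slightly from simulation, modeled here as a perturbed target map.
W_real = W_sim + 0.2 * rng.normal(size=(obs_dim, act_dim))
real_obs = rng.normal(size=(50, obs_dim))
real_acts = real_obs @ W_real

loss_before = mse(W, real_obs, real_acts)
W = bc_train(W, real_obs, real_acts)
loss_after = mse(W, real_obs, real_acts)

print(loss_after < loss_before)  # fine-tuning adapts the pre-trained policy
```

The point of the sketch is only the structure: the sim-pre-trained weights are a good initialization, so a small amount of real data suffices to close the residual sim-to-real gap.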
Source: http://arxiv.org/abs/2407.18902v1