Authors: Mara Levy, Siddhant Haldar, Lerrel Pinto, Abhinav Shrivastava
Abstract: Developing generalizable robot policies that can robustly handle varied
environmental conditions and object instances remains a fundamental challenge
in robot learning. While considerable efforts have focused on collecting large
robot datasets and developing policy architectures to learn from such data,
naively learning from visual inputs often results in brittle policies that fail
to transfer beyond the training data. This work presents Prescriptive Point
Priors for Policies (P3-PO), a novel framework that leverages recent advances
in computer vision and robot learning to construct a unique state
representation of the environment, achieving improved out-of-distribution
generalization for robot manipulation. This representation is obtained in two steps. First, a
human annotator prescribes a set of semantically meaningful points on a single
demonstration frame. These points are then propagated through the dataset using
off-the-shelf vision models. The derived points serve as an input to
state-of-the-art policy architectures for policy learning. Our experiments
across four real-world tasks demonstrate an overall 43% absolute improvement
over prior methods when evaluated in settings identical to training. Further,
P3-PO exhibits 58% and 80% gains across tasks for new object instances and more
cluttered environments, respectively. Videos illustrating the robot’s
performance are best viewed at point-priors.github.io.
Source: http://arxiv.org/abs/2412.06784v1
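
To make the two-step pipeline concrete, below is a minimal sketch of how human-annotated points might be propagated through a demonstration and flattened into a policy input. The `track_points` helper is a hypothetical placeholder for whichever off-the-shelf vision model performs the propagation, and the flattened-coordinate encoding is an assumption for illustration, not the authors' exact implementation.

```python
import numpy as np

def track_points(frames: np.ndarray, query_points: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder for an off-the-shelf point tracker
    (e.g. a video point-tracking or semantic-correspondence model).

    frames:       (T, H, W, 3) RGB demonstration video
    query_points: (N, 2) pixel coordinates a human annotated on frames[0]
    returns:      (T, N, 2) tracked locations of those points in every frame
    """
    raise NotImplementedError("plug in a tracker of your choice")

def build_point_states(frames: np.ndarray, annotated_points: np.ndarray) -> np.ndarray:
    """Turn one demonstration into per-timestep point-based states.

    Returns a (T, 2 * N) array: each row is the flattened (x, y)
    coordinates of the N prescribed points at that timestep, which can
    serve as policy input in place of raw pixels.
    """
    tracks = track_points(frames, annotated_points)  # (T, N, 2)
    return tracks.reshape(len(frames), -1)           # (T, 2N)
```

Because only the point coordinates (rather than full images) reach the policy, states from new object instances or cluttered scenes look similar to training states as long as the tracker finds the same semantic points, which is the intuition behind the reported generalization gains.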