Leveraging Skills from Unlabeled Prior Data for Efficient Online Exploration

Authors: Max Wilcoxson, Qiyang Li, Kevin Frans, Sergey Levine

Abstract: Unsupervised pretraining has been transformative in many supervised domains.
However, applying such ideas to reinforcement learning (RL) presents a unique
challenge in that fine-tuning does not involve mimicking task-specific data,
but rather exploring and locating the solution through iterative
self-improvement. In this work, we study how unlabeled prior trajectory data
can be leveraged to learn efficient exploration strategies. While prior data
can be used to pretrain a set of low-level skills, or as additional off-policy
data for online RL, it has been unclear how to combine these ideas effectively
for online exploration. Our method, SUPE (Skills from Unlabeled Prior data for
Exploration), demonstrates that a careful combination of these ideas compounds
their benefits. Our method first extracts low-level skills using a variational
autoencoder (VAE), and then pseudo-relabels unlabeled trajectories using an
optimistic reward model, transforming prior data into high-level, task-relevant
examples. Finally, SUPE uses these transformed examples as additional
off-policy data for online RL to learn a high-level policy that composes
pretrained low-level skills to explore efficiently. We empirically show that
SUPE reliably outperforms prior strategies, successfully solving a suite of
long-horizon, sparse-reward tasks. Code: https://github.com/rail-berkeley/supe.

Source: http://arxiv.org/abs/2410.18076v1
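To make the three stages of the abstract concrete, below is a minimal sketch, not the authors' implementation (see the linked repository for that). It assumes PyTorch, simple MLP architectures, and ensemble disagreement as the source of optimism; all of these are illustrative choices. A trajectory VAE extracts latent skills from unlabeled sub-trajectories, an optimistic reward model pseudo-labels the prior data, and the resulting high-level tuples (state, skill, optimistic reward, next state) can be added to the replay buffer of an off-policy high-level agent.

```python
import torch
import torch.nn as nn


class TrajectoryVAE(nn.Module):
    """Skill VAE: encodes a length-H sub-trajectory into a latent skill z
    and decodes actions conditioned on (observation, z)."""

    def __init__(self, obs_dim, act_dim, horizon, z_dim=8, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(horizon * (obs_dim + act_dim), hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * z_dim),  # outputs mean and log-variance
        )
        self.decoder = nn.Sequential(
            nn.Linear(obs_dim + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def loss(self, obs_seq, act_seq, kl_weight=0.1):
        # obs_seq: (B, H, obs_dim), act_seq: (B, H, act_dim)
        flat = torch.cat([obs_seq, act_seq], dim=-1).flatten(1)
        mean, log_var = self.encoder(flat).chunk(2, dim=-1)
        z = mean + torch.randn_like(mean) * (0.5 * log_var).exp()  # reparameterize
        z_tiled = z.unsqueeze(1).expand(-1, obs_seq.shape[1], -1)
        recon = self.decoder(torch.cat([obs_seq, z_tiled], dim=-1))
        recon_loss = ((recon - act_seq) ** 2).mean()
        kl = 0.5 * (mean.pow(2) + log_var.exp() - log_var - 1).sum(-1).mean()
        return recon_loss + kl_weight * kl

    def encode_mean(self, obs_seq, act_seq):
        flat = torch.cat([obs_seq, act_seq], dim=-1).flatten(1)
        mean, _ = self.encoder(flat).chunk(2, dim=-1)
        return mean


class RewardEnsemble(nn.Module):
    """Ensemble reward model; member disagreement supplies an optimism bonus
    (one possible way to realize an 'optimistic reward model')."""

    def __init__(self, obs_dim, n_models=5, hidden=256):
        super().__init__()
        self.members = nn.ModuleList([
            nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_models)
        ])

    def optimistic_reward(self, obs, beta=1.0):
        preds = torch.stack([m(obs).squeeze(-1) for m in self.members])
        return preds.mean(0) + beta * preds.std(0)  # UCB-style estimate


def relabel_trajectory(vae, reward_model, obs, acts, horizon):
    """Chunk one unlabeled trajectory into high-level transitions
    (s_t, z_t, r_t, s_{t+H}) usable as off-policy data for a high-level agent."""
    transitions = []
    T = obs.shape[0]
    with torch.no_grad():
        for t in range(0, T - horizon, horizon):
            o_seq = obs[t:t + horizon].unsqueeze(0)
            a_seq = acts[t:t + horizon].unsqueeze(0)
            z = vae.encode_mean(o_seq, a_seq).squeeze(0)  # inferred skill label
            r = reward_model.optimistic_reward(obs[t + horizon:t + horizon + 1]).squeeze(0)
            transitions.append((obs[t], z, r, obs[t + horizon]))
    return transitions
```

During online learning, the high-level policy would pick a skill z every `horizon` steps, the frozen VAE decoder would translate (observation, z) into low-level actions, and the relabeled transitions above would be mixed into the high-level agent's replay buffer alongside fresh online experience.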
