Authors: Bhavya Sukhija, Stelian Coros, Andreas Krause, Pieter Abbeel, Carmelo Sferrazza
Abstract: Reinforcement learning (RL) algorithms aim to balance exploiting the current
best strategy with exploring new options that could lead to higher rewards.
Most common RL algorithms use undirected exploration, i.e., they select random
sequences of actions. Exploration can also be directed using intrinsic rewards,
such as curiosity or model epistemic uncertainty. However, effectively
balancing task and intrinsic rewards is challenging and often task-dependent.
In this work, we introduce a framework, MaxInfoRL, for balancing intrinsic and
extrinsic exploration. MaxInfoRL steers exploration towards informative
transitions by maximizing intrinsic rewards such as the information gain about
the underlying task. When combined with Boltzmann exploration, this approach
naturally trades off maximization of the value function with that of the
entropy over states, rewards, and actions. We show that our approach achieves
sublinear regret in the simplified setting of multi-armed bandits. We then
apply this general formulation to a variety of off-policy model-free RL methods
for continuous state-action spaces, yielding novel algorithms that achieve
superior performance across hard exploration problems and complex scenarios
such as visual control tasks.
Source: http://arxiv.org/abs/2412.12098v1
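
To make the core idea from the abstract concrete, below is a minimal illustrative sketch, not the authors' implementation: a Boltzmann (softmax) policy over action values augmented with an intrinsic information-gain bonus. The names q_values, info_gain, and temperature are placeholders assumed for this example; how the information gain is actually estimated is left abstract here.

```python
import numpy as np


def boltzmann_policy(q_values, info_gain, temperature=1.0, rng=None):
    """Sample an action from a softmax over extrinsic value plus intrinsic bonus.

    q_values:    estimated extrinsic action values Q(s, a), one per action
    info_gain:   assumed per-action intrinsic reward, e.g. an information-gain
                 or epistemic-uncertainty estimate supplied externally
    temperature: Boltzmann temperature; higher values mean more exploration
    """
    rng = rng or np.random.default_rng()
    # Combine the task value with the intrinsic exploration bonus.
    augmented = np.asarray(q_values) + np.asarray(info_gain)
    # Numerically stable softmax at the given temperature.
    logits = augmented / temperature
    logits -= logits.max()
    probs = np.exp(logits)
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))


if __name__ == "__main__":
    # Toy example: action 2 has a low estimated task value but a large
    # information-gain bonus, so the policy still samples it frequently.
    q = np.array([1.0, 0.8, 0.2])
    gain = np.array([0.0, 0.1, 1.0])
    counts = np.bincount(
        [boltzmann_policy(q, gain, temperature=0.5) for _ in range(10_000)],
        minlength=3,
    )
    print("action frequencies:", counts / counts.sum())
```

In this toy setup, the softmax over the augmented values is what trades off exploiting high Q-value actions against exploring informative ones; the paper develops this trade-off for continuous state-action spaces and off-policy model-free methods.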