Maximum Entropy On-Policy Actor-Critic via Entropy Advantage Estimation

Authors: Jean Seong Bjorn Choe, Jong-Kook Kim

Abstract: Entropy regularisation is a widely adopted technique that enhances policy
optimisation performance and stability. A notable form of entropy
regularisation is augmenting the objective with an entropy term, thereby
simultaneously optimising the expected return and the entropy. This framework,
known as maximum entropy reinforcement learning (MaxEnt RL), has shown
theoretical and empirical successes. However, its practical application in
straightforward on-policy actor-critic settings remains surprisingly
underexplored. We hypothesise that this is due to the difficulty of managing
the entropy reward in practice. This paper proposes a simple method of
separating the entropy objective from the MaxEnt RL objective, which
facilitates the implementation of MaxEnt RL in on-policy settings. Our
empirical evaluations demonstrate that extending Proximal Policy Optimisation
(PPO) and Trust Region Policy Optimisation (TRPO) within the MaxEnt framework
improves policy optimisation performance in both MuJoCo and Procgen tasks.
Additionally, our results highlight MaxEnt RL’s capacity to enhance
generalisation.

Source: http://arxiv.org/abs/2407.18143v1
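
The abstract refers to the standard MaxEnt RL objective, which augments each per-step reward r_t with a weighted policy-entropy term, roughly E[ sum_t gamma^t ( r_t + alpha * H(pi(.|s_t)) ) ]. Below is a minimal sketch of how the entropy objective could be separated out and handled with its own advantage estimate, in the spirit of the title's "entropy advantage estimation". The function names, the second entropy critic, and the temperature alpha are illustrative assumptions, not the authors' implementation.

import numpy as np

def compute_gae(deltas, gamma=0.99, lam=0.95):
    # Generalised Advantage Estimation over a sequence of TD errors.
    adv = np.zeros_like(deltas)
    running = 0.0
    for t in reversed(range(len(deltas))):
        running = deltas[t] + gamma * lam * running
        adv[t] = running
    return adv

def maxent_advantage(rewards, values, entropies, entropy_values,
                     gamma=0.99, lam=0.95, alpha=0.01):
    # rewards, entropies:     length-T arrays (per-step reward and policy entropy)
    # values, entropy_values: length-(T+1) arrays from two critics, the last
    #                         entry bootstrapping the value beyond the rollout
    # TD errors for the ordinary return objective.
    reward_deltas = rewards + gamma * values[1:] - values[:-1]
    # TD errors for the entropy objective, treating H(pi(.|s_t)) as an
    # auxiliary reward estimated by its own critic.
    entropy_deltas = entropies + gamma * entropy_values[1:] - entropy_values[:-1]
    # Combine the two advantage estimates; the sum would drive a standard
    # PPO or TRPO policy update.
    return (compute_gae(reward_deltas, gamma, lam)
            + alpha * compute_gae(entropy_deltas, gamma, lam))

Keeping the entropy term out of the environment reward lets the return critic and the entropy critic be trained separately, which is one plausible reading of "separating the entropy objective" in the abstract.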
