Tilted Quantile Gradient Updates for Quantile-Constrained Reinforcement Learning

Authors: Chenglin Li, Guangchun Ruan, Hua Geng

Abstract: Safe reinforcement learning (RL) is a popular and versatile paradigm for learning
reward-maximizing policies with safety guarantees. Previous works tend to
express the safety constraints in an expectation form for ease of
implementation, but this turns out to be ineffective in maintaining safety
constraints with high probability. To this end, we move to
quantile-constrained RL, which enables a higher level of safety without any
expectation-form approximations. We directly estimate the quantile gradients
through sampling and provide theoretical proofs of convergence. A tilted
update strategy for quantile gradients is then implemented to compensate for
the asymmetric distributional density, with a direct benefit to return
performance. Experiments demonstrate that the proposed model fully meets safety
requirements (quantile constraints) while outperforming state-of-the-art
benchmarks with a higher return.

Source: http://arxiv.org/abs/2412.13184v1
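
The abstract mentions two technical ingredients: estimating cost quantiles (and their gradients) from samples, and a tilted update that weights the two sides of the quantile asymmetrically. The paper's exact estimator is not reproduced on this page; the snippet below is only a minimal Python sketch of the underlying idea, a stochastic pinball-loss update of a running quantile estimate with a hypothetical tilt parameter, and should not be read as the authors' algorithm.

import numpy as np

def tilted_quantile_update(q, cost, alpha, lr=0.01, tilt=1.0):
    """One stochastic update of a running alpha-quantile estimate.

    Sketch only: the pinball-loss subgradient step is standard, while the
    `tilt` factor is a hypothetical stand-in for the paper's asymmetric
    ("tilted") weighting. With tilt > 1 this toy version takes larger steps
    on the upper-tail side and settles at a more conservative value.
    """
    # Pinball-loss subgradient w.r.t. q: (1 - alpha) if cost < q, else -alpha.
    grad = (1.0 - alpha) if cost < q else -alpha
    step = lr * (tilt if cost >= q else 1.0)
    return q - step * grad

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    costs = rng.exponential(scale=1.0, size=200_000)  # stand-in for sampled episode costs
    q = 0.0
    for c in costs:
        q = tilted_quantile_update(q, c, alpha=0.9)
    # The running estimate should approach the empirical 0.9-quantile.
    print(f"running estimate: {q:.3f}, empirical quantile: {np.quantile(costs, 0.9):.3f}")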
