Backward explanations via redefinition of predicates

Authors: Léo Saulières, Martin C. Cooper, Florence Dupin de Saint Cyr

Abstract: History eXplanation based on Predicates (HXP) studies the behavior of a
Reinforcement Learning (RL) agent over a sequence of its interactions with
the environment (a history), through the prism of an arbitrary predicate. To
this end, an action importance score is computed for each action in the
history. The explanation consists of displaying the most important actions to
the user. As the computation of an action's importance is #W[1]-hard, the scores
for long histories must be approximated, at the expense of their quality. We
therefore propose a new HXP method, called Backward-HXP (B-HXP), to provide
explanations for these histories without having to approximate scores.
Experiments show the ability of B-HXP to summarise long histories.

Source: http://arxiv.org/abs/2408.02606v1
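The overall HXP pipeline described in the abstract — score every action in a history, then display the most important ones — can be sketched as follows. This is a minimal illustration only: the `importance` function here is a hypothetical stand-in, not the authors' actual (#W[1]-hard) predicate-based importance computation or the Backward-HXP procedure.

```python
# Illustrative sketch of an HXP-style explanation: rank the actions of a
# history by an importance score and return the top-k.
# NOTE: `importance` is a placeholder; the paper's real score measures how
# much an action contributes to satisfying a predicate, and is #W[1]-hard.
from typing import Callable, List, Tuple

def explain_history(
    history: List[Tuple[str, str]],           # (state, action) pairs
    importance: Callable[[str, str], float],  # assumed scoring function
    k: int = 3,
) -> List[Tuple[int, str, float]]:
    """Return the k most important actions as (index, action, score)."""
    scored = [
        (i, action, importance(state, action))
        for i, (state, action) in enumerate(history)
    ]
    # The explanation displays the highest-scoring actions to the user.
    return sorted(scored, key=lambda t: t[2], reverse=True)[:k]

# Toy usage with a hypothetical importance function.
toy_history = [("s0", "left"), ("s1", "right"), ("s2", "up"), ("s3", "down")]
toy_scores = {"left": 0.1, "right": 0.9, "up": 0.5, "down": 0.3}
top = explain_history(toy_history, lambda s, a: toy_scores[a], k=2)
# top == [(1, "right", 0.9), (2, "up", 0.5)]
```

For long histories, exact scoring along these lines becomes infeasible, which is the motivation for Backward-HXP avoiding score approximation altogether.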
