Polynomial-Time Approximability of Constrained Reinforcement Learning

Authors: Jeremy McMahan

Abstract: We study the computational complexity of approximating general constrained
Markov decision processes. Our primary contribution is the design of a
polynomial-time $(0,\epsilon)$-additive bicriteria approximation algorithm for
finding optimal constrained policies across a broad class of recursively
computable constraints, including almost-sure, chance, expectation, and their
anytime variants. Matching lower bounds imply our approximation guarantees are
optimal so long as $P \neq NP$. The generality of our approach results in
answers to several long-standing open complexity questions in the constrained
reinforcement learning literature. Specifically, we are the first to prove
polynomial-time approximability for the following settings: policies under
chance constraints, deterministic policies under multiple expectation
constraints, policies under non-homogeneous constraints (i.e., constraints of
different types), and policies under constraints for continuous-state
processes.
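
For context, a standard formalization of the constrained MDP problem and of the $(0,\epsilon)$-additive bicriteria guarantee is sketched below. The notation ($V$, $c_i$, $\tau_i$, $s_0$) is illustrative and the paper's exact definitions may differ. A constrained MDP asks for a policy maximizing value subject to $m$ constraint thresholds:

\[
  \max_{\pi} \; V^{\pi}_{r}(s_0)
  \quad \text{s.t.} \quad
  V^{\pi}_{c_i}(s_0) \ge \tau_i \quad \text{for } i = 1, \dots, m,
\]

where $V^{\pi}_{c_i}$ denotes the constraint value of policy $\pi$, whose definition depends on the constraint type (e.g., an expectation of cumulative constraint rewards for expectation constraints, or a probability for chance constraints). A $(0,\epsilon)$-additive bicriteria approximation then returns a policy $\hat{\pi}$ that loses nothing in objective value while violating each constraint by at most an additive $\epsilon$:

\[
  V^{\hat{\pi}}_{r}(s_0) \ge V^{\pi^*}_{r}(s_0)
  \qquad \text{and} \qquad
  V^{\hat{\pi}}_{c_i}(s_0) \ge \tau_i - \epsilon \quad \text{for all } i,
\]

where $\pi^*$ is an optimal feasible policy.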

Source: http://arxiv.org/abs/2502.07764v1
