Authors: Jonathan Lorraine, Safwan Hossain
Abstract: Neural networks are trained to learn an approximate mapping from an input
domain to a target domain. Incorporating prior knowledge about true mappings is
critical to learning a useful approximation. With current architectures, it is
challenging to enforce structure on the derivatives of the input-output
mapping. We propose to use a neural network to directly learn the Jacobian of
the input-output function, which allows easy control of the derivative. We
focus on structuring the derivative to allow invertibility and also demonstrate
that other useful priors, such as $k$-Lipschitz continuity, can be enforced. Using
this approach, we can learn approximations to simple functions that are
guaranteed to be invertible, and we can easily compute their inverses. We also
show similar results for 1-Lipschitz functions.
Source: http://arxiv.org/abs/2408.13237v1
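The abstract summarizes the approach but includes no code. Below is a minimal, hypothetical PyTorch sketch (not the authors' implementation) of the core idea restricted to one dimension, where the Jacobian reduces to a scalar derivative: a network outputs dy/dx, an output nonlinearity enforces the desired structure (strict positivity for invertibility, magnitude at most 1 for 1-Lipschitz), and the function itself is recovered by numerically integrating from a reference point. All names and design choices here (JacobianNet, the softplus/tanh constraints, trapezoid-rule integration) are assumptions for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class JacobianNet(nn.Module):
    """Learns the derivative dy/dx of a scalar map directly.

    Structure options (illustrative, not the paper's exact choices):
      "invertible": softplus keeps dy/dx > 0, so the integrated map is
                    strictly increasing and hence invertible.
      "lipschitz":  tanh keeps |dy/dx| <= 1, so the map is 1-Lipschitz.
    """
    def __init__(self, structure="invertible", hidden=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )
        self.structure = structure
        # Learnable integration constant: the value f(0).
        self.y0 = nn.Parameter(torch.zeros(1))

    def derivative(self, x):
        raw = self.body(x)
        if self.structure == "invertible":
            return F.softplus(raw)  # dy/dx > 0 everywhere
        return torch.tanh(raw)      # |dy/dx| <= 1 everywhere

    def forward(self, x, n_steps=64):
        # Recover f(x) = f(0) + integral_0^x f'(t) dt via the trapezoid rule.
        t = torch.linspace(0.0, 1.0, n_steps, device=x.device).view(1, -1)
        nodes = (x * t).reshape(-1, 1)          # per-sample nodes on [0, x]
        d = self.derivative(nodes).reshape(x.shape[0], n_steps)
        step = (x / (n_steps - 1)).squeeze(-1)  # signed step size
        integral = step * (d[:, 0] / 2 + d[:, 1:-1].sum(dim=1) + d[:, -1] / 2)
        return self.y0 + integral.unsqueeze(-1)

# Example usage: fit an invertible approximation to y = x^3 + x on [-2, 2].
net = JacobianNet(structure="invertible")
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x = torch.linspace(-2.0, 2.0, 256).unsqueeze(-1)
y = x ** 3 + x
for _ in range(2000):
    opt.zero_grad()
    F.mse_loss(net(x), y).backward()
    opt.step()

Because the constrained derivative is strictly positive in the "invertible" setting, the inverse can be computed by monotone root-finding (e.g., bisection on f(y) = target). The paper treats general vector-valued Jacobians; this sketch only covers the 1-D special case.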