Authors: Youngjae Min, Anoopkumar Sonar, Navid Azizan
Abstract: Incorporating prior knowledge or specifications of input-output relationships
into machine learning models has gained significant attention, as it enhances
generalization from limited data and yields outputs that conform to the given
specifications. However, most
existing approaches use soft constraints by penalizing violations through
regularization, which offers no guarantee of constraint satisfaction, an
essential requirement in safety-critical applications. On the other hand,
imposing hard constraints on neural networks may hinder their representational
power, adversely affecting performance. To address this, we propose HardNet, a
practical framework for constructing neural networks that inherently satisfy
hard constraints without sacrificing model capacity. Specifically, we encode
affine and convex hard constraints, dependent on both inputs and outputs, by
appending a differentiable projection layer to the network’s output. This
architecture allows unconstrained optimization of the network parameters using
standard algorithms while ensuring constraint satisfaction by construction.
Furthermore, we show that HardNet retains the universal approximation
capabilities of neural networks. We demonstrate the versatility and
effectiveness of HardNet across various applications: fitting functions under
constraints, learning optimization solvers, optimizing control policies in
safety-critical systems, and learning safe decision logic for aircraft systems.
Source: http://arxiv.org/abs/2410.10807v1