Abstract

A system and method are described for improving the interpretability of predictive models, such as functional neural networks (FNNs), that operate on continuous (functional) data. For such FNNs, certain explainability techniques may be less effective because they can involve discretizing the input signal, which may result in a loss of functional information. The disclosed technology can integrate custom, differentiable functional layers, such as basis expansion or inner product layers, within the neural network architecture. This design can preserve a differentiable path from the model's output to the original functional input, which may enable the application of gradient-based attribution methods. The process can generate a functional importance curve, a data structure that quantitatively illustrates which segments of an input function contributed to a specific prediction.
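
The following is a minimal sketch, in PyTorch, of the general idea described above: a fixed sine basis stands in for the basis expansion/inner product layer, and gradient-times-input stands in for the gradient-based attribution method. The class names (BasisExpansionLayer, FunctionalNet), the basis choice, and all parameters are illustrative assumptions, not the disclosed implementation.

    import math
    import torch
    import torch.nn as nn

    class BasisExpansionLayer(nn.Module):
        # Illustrative layer: projects a sampled functional input x(t) onto a small
        # set of smooth basis functions via a differentiable inner product.
        def __init__(self, n_timepoints: int, n_basis: int):
            super().__init__()
            t = torch.linspace(0.0, 1.0, n_timepoints)              # evaluation grid
            k = torch.arange(1, n_basis + 1).unsqueeze(1).float()   # basis indices
            # Fixed, differentiable basis matrix of shape (n_basis, n_timepoints)
            self.register_buffer("basis", torch.sin(math.pi * k * t))

        def forward(self, x):
            # Inner product <x, phi_k>, approximated as a mean over the grid;
            # gradients flow back through it to every sample of the input function.
            return (x.unsqueeze(1) * self.basis).mean(dim=-1)

    class FunctionalNet(nn.Module):
        def __init__(self, n_timepoints: int = 100, n_basis: int = 8):
            super().__init__()
            self.expand = BasisExpansionLayer(n_timepoints, n_basis)
            self.head = nn.Sequential(nn.Linear(n_basis, 16), nn.ReLU(), nn.Linear(16, 1))

        def forward(self, x):
            return self.head(self.expand(x))

    # Because the functional layer is differentiable, d(output)/d(x(t)) is defined
    # at every sample point, yielding a functional importance curve over the domain.
    model = FunctionalNet()
    x = torch.randn(1, 100, requires_grad=True)  # one sampled input function x(t)
    model(x).sum().backward()
    importance_curve = (x.grad * x).detach()     # gradient-times-input attribution

In this sketch, the importance curve has the same length as the sampled input, so high-magnitude regions indicate segments of the function that most influenced the prediction.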

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
