

Poster

Improving Neural Additive Models with Bayesian Principles

Kouroche Bouchiat · Alexander Immer · Hugo Yèche · Gunnar Rätsch · Vincent Fortuin


Abstract:

Neural additive models (NAMs) enhance the transparency of deep neural networks by handling input features in separate additive sub-networks. However, they lack inherent mechanisms that provide calibrated uncertainties and enable selection of relevant features and interactions. Approaching NAMs from a Bayesian perspective, we augment them in three primary ways, namely by a) providing credible intervals for the individual additive sub-networks; b) estimating the marginal likelihood to perform an implicit selection of features via an empirical Bayes procedure; and c) facilitating the ranking of feature pairs as candidates for second-order interaction in fine-tuned models. In particular, we develop Laplace-approximated NAMs (LA-NAMs), which show improved empirical performance on tabular datasets and challenging real-world medical tasks.
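To make the additive structure described above concrete, the following is a minimal PyTorch sketch of a generic NAM: one small sub-network per input feature, with the per-feature outputs summed to form the prediction. This is an illustration only, not the authors' LA-NAM implementation; the layer sizes, class names, and toy data are arbitrary choices, and the Bayesian components (Laplace approximation, credible intervals, marginal-likelihood feature selection) are not shown.

```python
# Minimal sketch of a neural additive model (NAM): each input feature is
# handled by its own small sub-network, and the per-feature outputs are
# summed (plus a bias) to produce the prediction. Illustrative only; not
# the LA-NAM code from the paper.
import torch
import torch.nn as nn


class FeatureNet(nn.Module):
    """Sub-network mapping a single scalar feature to a scalar contribution."""

    def __init__(self, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):  # x: (batch, 1)
        return self.net(x)


class NAM(nn.Module):
    """Additive model: prediction = bias + sum_j f_j(x_j)."""

    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.feature_nets = nn.ModuleList(
            FeatureNet(hidden) for _ in range(n_features)
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):  # x: (batch, n_features)
        contributions = [
            f(x[:, j:j + 1]) for j, f in enumerate(self.feature_nets)
        ]
        # The additive structure keeps the model transparent: each f_j(x_j)
        # can be inspected and plotted against x_j in isolation.
        return self.bias + torch.stack(contributions, dim=0).sum(dim=0)


# Toy regression example (hypothetical data, for illustration).
x = torch.randn(128, 4)
y = x[:, :1].sin() + 0.5 * x[:, 1:2] + 0.1 * torch.randn(128, 1)
model = NAM(n_features=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
```

In the paper's setting, each sub-network would additionally receive a Laplace approximation over its weights, yielding per-feature credible intervals and a marginal-likelihood estimate used for implicit feature selection.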
