Linearized Laplace Inference in Neural Additive Models


METADATA ONLY

Date

2023

Publication Type

Conference Paper

ETH Bibliography

yes

Abstract

Deep neural networks are highly effective but suffer from a lack of interpretability due to their black-box nature. Neural additive models (NAMs) address this by separating the network into additive sub-networks, thereby revealing the contribution of each feature to the predictions. In this paper, we approach the NAM from a Bayesian perspective in order to quantify the uncertainty in the recovered feature contributions. The linearized Laplace approximation enables inference over these contributions directly in function space and yields a tractable estimate of the marginal likelihood, which can be used to perform implicit feature selection through an empirical Bayes procedure. Empirically, we show that Laplace-approximated NAMs (LA-NAMs) are both more robust to noise and easier to interpret than their non-Bayesian counterparts on tabular regression and classification tasks.
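The additive structure the abstract refers to can be made concrete with a minimal sketch: one small sub-network per input feature, with the prediction formed as the sum of the per-feature outputs. All names, sizes, and initializations below are illustrative assumptions, not taken from the LA-NAM implementation.

```python
import math
import random

random.seed(0)

def init_subnet(hidden=8):
    """One-hidden-layer MLP acting on a single scalar feature."""
    return {
        "w1": [random.gauss(0, 1) for _ in range(hidden)],
        "b1": [0.0] * hidden,
        "w2": [random.gauss(0, 1) / math.sqrt(hidden) for _ in range(hidden)],
        "b2": 0.0,
    }

def subnet_forward(net, x):
    # tanh hidden layer followed by a linear readout
    h = [math.tanh(w * x + b) for w, b in zip(net["w1"], net["b1"])]
    return sum(w * a for w, a in zip(net["w2"], h)) + net["b2"]

def nam_forward(subnets, features):
    # Additive structure: f(x) = sum_d f_d(x_d). Each term f_d can be
    # plotted on its own, which is what makes NAMs interpretable.
    return sum(subnet_forward(net, x) for net, x in zip(subnets, features))

subnets = [init_subnet() for _ in range(3)]
x = [0.5, -1.0, 2.0]
total = nam_forward(subnets, x)
parts = [subnet_forward(net, xi) for net, xi in zip(subnets, x)]
assert abs(total - sum(parts)) < 1e-9  # prediction decomposes exactly per feature
```

In the Bayesian treatment described above, each per-feature function additionally carries a posterior variance obtained from the linearized Laplace approximation, rather than a single point estimate as in this sketch.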

Publication status

published

Book title

5th Symposium on Advances in Approximate Bayesian Inference (AABI 2023)

Publisher

OpenReview

Event

5th Symposium on Advances in Approximate Bayesian Inference (AABI 2023)

Organisational unit

09568 - Rätsch, Gunnar
