Date: 2024-02-05
Type: Working Paper
ETH Bibliography: yes
Abstract
Neural additive models (NAMs) can improve the interpretability of deep neural networks by handling input features in separate additive sub-networks. However, they lack inherent mechanisms that provide calibrated uncertainties and enable selection of relevant features and interactions. Approaching NAMs from a Bayesian perspective, we enhance them in three primary ways, namely by a) providing credible intervals for the individual additive sub-networks; b) estimating the marginal likelihood to perform an implicit selection of features via an empirical Bayes procedure; and c) enabling a ranking of feature pairs as candidates for second-order interaction in fine-tuned models. In particular, we develop Laplace-approximated NAMs (LA-NAMs), which show improved empirical performance on tabular datasets and challenging real-world medical tasks.
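The additive structure described in the abstract (one sub-network per input feature, outputs summed) can be sketched as follows. This is an illustrative toy implementation with randomly initialized weights, not the paper's code; the sub-network size and ReLU activation are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_subnet(hidden=8):
    # One tiny MLP per feature: 1 -> hidden -> 1 (random weights, for illustration only)
    W1 = rng.normal(size=(1, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(size=(hidden, 1)); b2 = np.zeros(1)
    def f(x):  # x: (batch, 1) slice holding a single feature
        h = np.maximum(x @ W1 + b1, 0.0)  # ReLU hidden layer
        return h @ W2 + b2
    return f

def nam_forward(x, subnets, bias=0.0):
    # NAM prediction: global bias plus the sum of per-feature sub-network outputs.
    # Each sub-network sees only its own feature column, which is what makes the
    # per-feature contributions directly interpretable.
    return bias + sum(f(x[:, i:i + 1]) for i, f in enumerate(subnets))

subnets = [make_subnet() for _ in range(3)]
x = rng.normal(size=(8, 3))
y = nam_forward(x, subnets)
print(y.shape)  # (8, 1)
```

Because the model output decomposes feature-by-feature, each sub-network's contribution can be plotted on its own; the paper's Bayesian treatment additionally equips these per-feature curves with credible intervals.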
Publication status: published
Journal / series: arXiv
Publisher: Cornell University
Edition / version: v3
Subject: Machine Learning (stat.ML); Machine Learning (cs.LG); FOS: Computer and information sciences
Organisational unit: 09568 - Rätsch, Gunnar / Rätsch, Gunnar
Related publications and datasets
Is previous version of: http://hdl.handle.net/20.500.11850/725887