Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning
METADATA ONLY
Date
2021-06-15
Publication Type
Working Paper
ETH Bibliography
yes
Abstract
Marginal-likelihood-based model selection, although promising, is rarely used in deep learning due to estimation difficulties. Instead, most approaches rely on validation data, which may not be readily available. In this work, we present a scalable marginal-likelihood estimation method to select both hyperparameters and network architectures, based on the training data alone. Some hyperparameters can be estimated online during training, simplifying the procedure. Our marginal-likelihood estimate is based on Laplace's method and Gauss-Newton approximations to the Hessian, and it outperforms cross-validation and manual tuning on standard regression and image classification datasets, especially in terms of calibration and out-of-distribution detection. Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable (e.g., in nonstationary settings).
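The abstract's core quantity — a Laplace approximation to the log marginal likelihood with a (generalized) Gauss-Newton Hessian — can be illustrated on a toy linear-Gaussian regression model, where the Gauss-Newton matrix coincides with the exact Hessian and the Laplace approximation is exact. This is an illustrative sketch under that simplifying assumption, not the paper's implementation; the function name and toy model are assumptions, not taken from the source.

```python
import numpy as np

def laplace_log_marginal_likelihood(X, y, theta_map, prior_prec, noise_prec):
    """Laplace approximation to log p(y | X) for linear regression with a
    Gaussian prior N(0, prior_prec^-1 I) and Gaussian noise of precision
    noise_prec. For this linear-Gaussian model the generalized Gauss-Newton
    matrix equals the exact Hessian, so the approximation is exact.
    """
    n, d = X.shape
    resid = y - X @ theta_map
    # Log joint at the MAP: log likelihood + log prior (constants kept explicit).
    log_lik = 0.5 * n * np.log(noise_prec / (2 * np.pi)) \
        - 0.5 * noise_prec * resid @ resid
    log_prior = 0.5 * d * np.log(prior_prec / (2 * np.pi)) \
        - 0.5 * prior_prec * theta_map @ theta_map
    # Gauss-Newton Hessian of the negative log joint: noise_prec * X^T X + prior precision.
    H = noise_prec * X.T @ X + prior_prec * np.eye(d)
    _, logdet = np.linalg.slogdet(H)
    # Laplace: log Z ≈ log joint(MAP) + (d/2) log 2π − (1/2) log|H|.
    return log_lik + log_prior + 0.5 * d * np.log(2 * np.pi) - 0.5 * logdet
```

In the spirit of the abstract, hyperparameters such as `prior_prec` could then be selected by maximizing this quantity on the training data alone, with no validation split.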
Publication status
published
Journal / series
arXiv
Pages / Article No.
2104.04975
Publisher
Cornell University
Organisational unit
09568 - Rätsch, Gunnar / Rätsch, Gunnar