Metadata only
Date
2020-09
Type
Report
ETH Bibliography
yes
Abstract
We present a novel algorithmic approach and an error analysis leveraging Quasi-Monte Carlo points for training deep neural network (DNN) surrogates of Data-to-Observable (DtO) maps in engineering design. Our analysis reveals higher-order consistent, deterministic choices of training points in the input data space for deep and shallow neural networks with holomorphic activation functions such as tanh. These novel training points are proved to facilitate higher-order decay (in terms of the number of training samples) of the underlying generalization error, with consistency error bounds that are free from the curse of dimensionality in the input data space, provided that DNN weights in hidden layers satisfy certain summability conditions. We present numerical experiments for DtO maps from elliptic and parabolic PDEs with uncertain inputs that confirm the theoretical analysis.
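The core idea of the abstract, training a tanh-network surrogate on deterministic low-discrepancy points rather than random samples, can be sketched very loosely as follows. This is not the report's method: Halton points stand in for the higher-order QMC rules analyzed there, and the toy `observable` map, the network width, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

def halton(n, d):
    """Deterministic Halton low-discrepancy points in [0, 1)^d
    (a simple QMC family; the report uses higher-order rules)."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29][:d]
    pts = np.empty((n, d))
    for j, base in enumerate(primes):
        for i in range(n):
            f, x, k = 1.0, 0.0, i + 1
            while k > 0:
                f /= base
                x += f * (k % base)
                k //= base
            pts[i, j] = x
    return pts

def observable(X):
    # Hypothetical stand-in for a Data-to-Observable map (not from the report).
    return np.sin(np.pi * X[:, 0]) * X[:, 1]

d, n = 2, 128
X = halton(n, d)            # deterministic QMC training points in the input space
y = observable(X)

# Shallow tanh network surrogate, fitted by plain gradient descent on MSE.
rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.5, (d, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

def mse():
    H = np.tanh(X @ W1 + b1)
    return float(np.mean(((H @ W2 + b2).ravel() - y) ** 2))

mse_before = mse()
lr = 0.1
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)
    err = (H @ W2 + b2).ravel() - y        # residual on training points
    g = 2.0 * err[:, None] / n             # d(mean err^2)/d(prediction)
    gW2 = H.T @ g;             gb2 = g.sum(0)
    gH = (g @ W2.T) * (1.0 - H ** 2)       # backprop through tanh
    gW1 = X.T @ gH;            gb1 = gH.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse_after = mse()
print(f"training MSE: {mse_before:.4f} -> {mse_after:.4f}")
```

The only QMC-specific ingredient here is the choice of `X`: the training inputs are a fixed deterministic point set rather than i.i.d. draws, which is the design choice whose generalization-error consequences the report analyzes.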
Publication status
published
Journal / series
SAM Research Report
Volume
Publisher
Seminar for Applied Mathematics, ETH Zurich
Subject
deep learning; higher-order QMC; generalization error; deep neural networks; scientific computing
Organisational unit
03851 - Mishra, Siddhartha / Mishra, Siddhartha
03435 - Schwab, Christoph / Schwab, Christoph