Error analysis for physics informed neural networks (PINNs) approximating Kolmogorov PDEs
Metadata only
Date
2021-06
Type
- Report
ETH Bibliography
yes
Abstract
Physics informed neural networks (PINNs) approximate solutions of PDEs by minimizing pointwise residuals. We derive rigorous bounds on the error incurred by PINNs in approximating the solutions of a large class of linear parabolic PDEs, namely Kolmogorov equations, which include the heat equation and the Black-Scholes equation of option pricing as examples. We construct neural networks whose PINN residual (generalization error) can be made as small as desired. We also prove that the total L2-error can be bounded by the generalization error, which in turn is bounded in terms of the training error, provided that a sufficient number of randomly chosen training (collocation) points is used. Moreover, we prove that the size of the PINNs and the number of training samples grow only polynomially with the underlying dimension, enabling PINNs to overcome the curse of dimensionality in this context. These results provide a comprehensive error analysis for PINNs in approximating Kolmogorov PDEs.
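For illustration only (not part of the report), the following minimal sketch shows how a PINN residual for the one-dimensional heat equation u_t = u_xx, the simplest Kolmogorov PDE named in the abstract, can be formed and minimized at randomly chosen collocation points. The PyTorch network size, sampling, and optimizer settings are assumptions made for this sketch; initial and boundary terms of the loss are omitted.

# Minimal PINN sketch for u_t = u_xx on [0,1] x [0,1] (illustrative, not the report's code)
import torch

torch.manual_seed(0)

# Small fully connected network u_theta(t, x)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pde_residual(t, x):
    """Pointwise PINN residual r = u_t - u_xx at collocation points (t, x)."""
    t = t.requires_grad_(True)
    x = x.requires_grad_(True)
    u = net(torch.cat([t, x], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t - u_xx

# Randomly chosen interior collocation points
t_c = torch.rand(1024, 1)
x_c = torch.rand(1024, 1)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    # Training error: mean squared residual (initial/boundary terms omitted)
    loss = pde_residual(t_c, x_c).pow(2).mean()
    loss.backward()
    opt.step()

In the report's terminology, the mean squared residual over the random collocation points plays the role of the training error, which bounds the generalization error and, in turn, the total L2-error.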
Publication status
published
External links
Journal / series
SAM Research Report
Volume
Publisher
Seminar for Applied Mathematics, ETH Zurich
Subject
Deep learning; Neural networks; PINNs; Kolmogorov PDE
Organisational unit
03851 - Mishra, Siddhartha / Mishra, Siddhartha
Related publications and datasets
Is previous version of: https://doi.org/10.3929/ethz-b-000583235