Search Results
On generalization error estimates of physics informed neural networks for approximating dispersive PDEs
(2021) SAM Research Report. Physics informed neural networks (PINNs) have recently been widely used for robust and accurate approximation of PDEs. We provide rigorous upper bounds on the generalization error of PINNs approximating solutions of the forward problem for several dispersive PDEs.

Coupled Oscillatory Recurrent Neural Network (coRNN): An accurate and (gradient) stable architecture for learning long time dependencies
(2020) SAM Research Report. Circuits of biological neurons, such as in the functional parts of the brain, can be modeled as networks of coupled oscillators. Inspired by the ability of these systems to express a rich set of outputs while keeping (gradients of) state variables bounded, we propose a novel architecture for recurrent neural networks. Our proposed RNN is based on a time-discretization of a system of second-order ordinary differential equations, modeling ...

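For intuition, a minimal sketch of an oscillator-based RNN cell in the spirit of the description above: the hidden state follows a damped, driven second-order ODE that is integrated with a simple explicit step. The parameter names (gamma, eps, dt) and the particular time-stepping below are illustrative assumptions, not the report's exact scheme.

```python
import torch
import torch.nn as nn

class OscillatorRNNCell(nn.Module):
    """Toy RNN cell driven by a damped, coupled oscillator ODE
        y'' = tanh(W y + W_hat y' + V u + b) - gamma * y - eps * y',
    integrated with a simple explicit step. Names and the discretization
    are illustrative assumptions, not the report's exact scheme."""
    def __init__(self, input_dim, hidden_dim, dt=0.01, gamma=1.0, eps=0.01):
        super().__init__()
        self.dt, self.gamma, self.eps = dt, gamma, eps
        self.W = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.W_hat = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.V = nn.Linear(input_dim, hidden_dim)

    def forward(self, u, y, z):
        # z approximates y', the velocity of the hidden state
        accel = torch.tanh(self.W(y) + self.W_hat(z) + self.V(u)) \
                - self.gamma * y - self.eps * z
        z = z + self.dt * accel
        y = y + self.dt * z
        return y, z
```

Because the update only adds bounded increments scaled by dt, both the hidden state and its "velocity" stay well behaved over long sequences, which is the property the abstract alludes to.
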
Physics Informed Neural Networks for Simulating Radiative Transfer
(2020) SAM Research Report. We propose a novel machine learning algorithm for simulating radiative transfer. Our algorithm is based on physics informed neural networks (PINNs), which are trained by minimizing the residual of the underlying radiative transfer equations. We present extensive experiments and theoretical error estimates to demonstrate that PINNs provide a very easy to implement, fast, robust and accurate method for simulating radiative transfer. We also ...

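The residual-minimization principle behind PINNs can be seen in a toy setting. The radiative transfer equation itself is considerably more involved, so the sketch below uses a 1D advection equation u_t + c u_x = 0 as a stand-in; the network size, optimizer settings and collocation sampling are illustrative assumptions, and boundary terms are omitted for brevity.

```python
import torch
import torch.nn as nn

# Minimal PINN sketch: minimize the PDE residual at random collocation points,
# plus a data term enforcing the initial condition u(0, x) = sin(2*pi*x).
torch.manual_seed(0)
c = 1.0
net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(1000):
    # interior collocation points (t, x) in [0, 1] x [0, 1]
    tx = torch.rand(256, 2, requires_grad=True)
    u = net(tx)
    grads = torch.autograd.grad(u, tx, torch.ones_like(u), create_graph=True)[0]
    u_t, u_x = grads[:, :1], grads[:, 1:]
    residual = u_t + c * u_x

    # initial-condition penalty at t = 0
    x0 = torch.rand(256, 1)
    u0 = net(torch.cat([torch.zeros_like(x0), x0], dim=1))
    loss = residual.pow(2).mean() \
         + (u0 - torch.sin(2 * torch.pi * x0)).pow(2).mean()

    opt.zero_grad()
    loss.backward()
    opt.step()
```
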
Enhancing accuracy of deep learning algorithms by training with low-discrepancy sequences
(2020) SAM Research Report. We propose a deep supervised learning algorithm based on low-discrepancy sequences as the training set. By a combination of theoretical arguments and extensive numerical experiments, we demonstrate that the proposed algorithm significantly outperforms standard deep learning algorithms that are based on randomly chosen training data, for problems in moderately high dimensions. The proposed algorithm provides an efficient method for building ...

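A minimal sketch of what using a low-discrepancy training set means in practice, with Sobol points drawn from torch.quasirandom; the target function g and the sizes are made-up placeholders, not taken from the report.

```python
import torch

# Low-discrepancy (Sobol) training inputs vs. i.i.d. uniform ones for a
# supervised regression problem in d dimensions.
d, n = 4, 512
g = lambda x: torch.sin(x.sum(dim=1, keepdim=True))   # placeholder target map

sobol = torch.quasirandom.SobolEngine(dimension=d, scramble=True, seed=0)
x_sobol = sobol.draw(n)            # low-discrepancy points in [0, 1]^d
x_rand = torch.rand(n, d)          # randomly chosen points in [0, 1]^d

train_sets = {"sobol": (x_sobol, g(x_sobol)), "random": (x_rand, g(x_rand))}
# ...train the same network on each set and compare generalization errors.
```
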
Estimates on the generalization error of Physics Informed Neural Networks (PINNs) for approximating PDEs
(2020) SAM Research Report. Physics informed neural networks (PINNs) have recently been widely used for robust and accurate approximation of PDEs. We provide rigorous upper bounds on the generalization error of PINNs approximating solutions of the forward problem for PDEs. An abstract formalism is introduced and stability properties of the underlying PDE are leveraged to derive an estimate for the generalization error in terms of the training error and number of ...

Higher-order Quasi-Monte Carlo Training of Deep Neural Networks
(2020) SAM Research Report. We present a novel algorithmic approach and an error analysis leveraging Quasi-Monte Carlo points for training deep neural network (DNN) surrogates of Data-to-Observable (DtO) maps in engineering design. Our analysis reveals higher-order consistent, deterministic choices of training points in the input data space for deep and shallow neural networks with holomorphic activation functions such as tanh. These novel training points are proved ...

Iterative Surrogate Model Optimization (ISMO): An active learning algorithm for PDE constrained optimization with deep neural networks
(2020) SAM Research Report. We present a novel active learning algorithm, termed iterative surrogate model optimization (ISMO), for robust and efficient numerical approximation of PDE constrained optimization problems. This algorithm is based on deep neural networks and its key feature is the iterative selection of training data through a feedback loop between deep neural networks and any underlying standard optimization algorithm. Under suitable hypotheses, we ...

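A hedged sketch of the kind of feedback loop described above: fit a DNN surrogate on the current data, run a cheap optimizer on the surrogate, query the true (expensive) objective at the surrogate's minimizers, and enlarge the training set. The objective f, network sizes and iteration counts below are illustrative placeholders, not the report's algorithm.

```python
import torch
import torch.nn as nn

# Placeholder for an expensive PDE-constrained cost functional.
f = lambda x: ((x - 0.3) ** 2).sum(dim=1, keepdim=True)

d, n0, k = 2, 32, 8
X = torch.rand(n0, d)
Y = f(X)

for it in range(5):
    # (1) fit a surrogate on the current training set
    surrogate = nn.Sequential(nn.Linear(d, 64), nn.Tanh(), nn.Linear(64, 1))
    opt = torch.optim.Adam(surrogate.parameters(), lr=1e-2)
    for _ in range(500):
        opt.zero_grad()
        loss = (surrogate(X) - Y).pow(2).mean()
        loss.backward()
        opt.step()

    # (2) run a standard optimizer on the cheap surrogate from k random starts
    cand = torch.rand(k, d, requires_grad=True)
    copt = torch.optim.Adam([cand], lr=5e-2)
    for _ in range(200):
        copt.zero_grad()
        surrogate(cand).sum().backward()
        copt.step()

    # (3) query the true objective at the candidates and augment the data
    cand = cand.detach().clamp(0.0, 1.0)
    X = torch.cat([X, cand])
    Y = torch.cat([Y, f(cand)])
```
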
On the well-posedness of Bayesian inversion for PDEs with ill-posed forward problems
(2021) SAM Research Report. We study the well-posedness of Bayesian inverse problems for PDEs, for which the underlying forward problem may be ill-posed. Such PDEs, which include the fundamental equations of fluid dynamics, are characterized by the lack of rigorous global existence and stability results as well as possible non-convergence of numerical approximations. Under very general hypotheses on approximations to these PDEs, we prove that the posterior measure, ...

Well-posedness of Bayesian inverse problems for hyperbolic conservation laws
(2021) SAM Research Report. We study the well-posedness of the Bayesian inverse problem for scalar hyperbolic conservation laws, where the statistical information about inputs such as the initial datum and the (possibly discontinuous) flux function is inferred from noisy measurements. In particular, the Lipschitz continuity of the measurement to posterior map, as well as the stability of the posterior to approximations, are established with respect to the Wasserstein ...

On universal approximation and error bounds for Fourier Neural Operators
(2021) SAM Research Report. Fourier neural operators (FNOs) have recently been proposed as an effective framework for learning operators that map between infinite-dimensional spaces. We prove that FNOs are universal, in the sense that they can approximate any continuous operator to desired accuracy. Moreover, we suggest a mechanism by which FNOs can approximate operators associated with PDEs efficiently. Explicit error bounds are derived to show that the size of the ...

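For reference, a minimal sketch of the spectral convolution layer that FNOs are built from: transform the input to Fourier space, mix a truncated set of low modes with learnable complex weights, transform back, and add a pointwise linear term. The sizes and the truncation parameter k_max are illustrative, and the analysis in the report concerns this architecture rather than this particular implementation.

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """Sketch of a 1D FNO spectral layer: keep the lowest k_max Fourier
    modes, mix them with learnable complex weights, and add a pointwise
    linear (kernel-size-1 convolution) term."""
    def __init__(self, channels, k_max):
        super().__init__()
        self.k_max = k_max                     # must satisfy k_max <= grid//2 + 1
        scale = 1.0 / channels
        self.weights = nn.Parameter(
            scale * torch.randn(channels, channels, k_max, dtype=torch.cfloat))
        self.pointwise = nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, v):                      # v: (batch, channels, grid)
        v_hat = torch.fft.rfft(v)              # (batch, channels, grid//2 + 1)
        out_hat = torch.zeros_like(v_hat)
        out_hat[..., :self.k_max] = torch.einsum(
            "bik,iok->bok", v_hat[..., :self.k_max], self.weights)
        return torch.fft.irfft(out_hat, n=v.size(-1)) + self.pointwise(v)
```

For example, SpectralConv1d(channels=8, k_max=12) applied to a tensor of shape (4, 8, 64) returns a tensor of the same shape; stacking several such layers with nonlinearities in between gives the operator architecture the error bounds refer to.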