Search Results
Estimates on the generalization error of Physics Informed Neural Networks (PINNs) for approximating PDEs II: A class of inverse problems
(2020) SAM Research Report. Physics informed neural networks (PINNs) have recently been very successfully applied for efficiently approximating inverse problems for PDEs. We focus on a particular class of inverse problems, the so-called data assimilation or unique continuation problems, and prove rigorous estimates on the generalization error of PINNs approximating them. An abstract framework is presented and conditional stability estimates for the underlying inverse ...
On the approximation of rough functions with deep neural networks
(2020) SAM Research Report. Deep neural networks and the ENO procedure are both efficient frameworks for approximating rough functions. We prove that at any order, the ENO interpolation procedure can be cast as a deep ReLU neural network. This surprising fact enables the transfer of several desirable properties of the ENO procedure to deep neural networks, including its high-order accuracy at approximating Lipschitz functions. Numerical tests for the resulting neural ...
On the conservation of energy in two-dimensional incompressible flows
(2020) SAM Research Report. We prove the conservation of energy for weak and statistical solutions of the two-dimensional Euler equations, generated as strong (in an appropriate topology) limits of the underlying Navier-Stokes equations and a Monte Carlo-Spectral Viscosity numerical approximation, respectively. We characterize this conservation of energy in terms of a uniform decay of the so-called structure function, allowing us to extend existing results on energy ...
Higher-order Quasi-Monte Carlo Training of Deep Neural Networks
(2020) SAM Research Report. We present a novel algorithmic approach and an error analysis leveraging Quasi-Monte Carlo points for training deep neural network (DNN) surrogates of Data-to-Observable (DtO) maps in engineering design. Our analysis reveals higher-order consistent, deterministic choices of training points in the input data space for deep and shallow neural networks with holomorphic activation functions such as tanh. These novel training points are proved ...
Iterative Surrogate Model Optimization (ISMO): An active learning algorithm for PDE constrained optimization with deep neural networks
(2020) SAM Research Report. We present a novel active learning algorithm, termed iterative surrogate model optimization (ISMO), for robust and efficient numerical approximation of PDE constrained optimization problems. This algorithm is based on deep neural networks, and its key feature is the iterative selection of training data through a feedback loop between the deep neural networks and any underlying standard optimization algorithm. Under suitable hypotheses, we ...
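The abstract's feedback loop (train a surrogate, minimize it, feed the minimizer back into the training set) can be illustrated in miniature. The sketch below is purely hypothetical: it uses a parabola through the three best samples as the surrogate in place of the paper's deep networks, and all names are illustrative.

```python
def parabola_min(p1, p2, p3):
    """x-coordinate of the vertex of the parabola through three points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    num = (x2**2 - x3**2) * y1 + (x3**2 - x1**2) * y2 + (x1**2 - x2**2) * y3
    den = 2 * ((x2 - x3) * y1 + (x3 - x1) * y2 + (x1 - x2) * y3)
    return num / den

def ismo_sketch(objective, xs, iters=3):
    """Toy 1D version of the ISMO feedback loop: fit a cheap surrogate
    to the sampled data, minimize the surrogate, and feed the minimizer
    back into the training set as a new query of the true objective."""
    data = [(x, objective(x)) for x in xs]       # initial training data
    for _ in range(iters):
        data.sort(key=lambda p: p[1])
        x_new = parabola_min(*data[:3])          # minimize the surrogate
        data.append((x_new, objective(x_new)))   # feedback: query the true model
    return min(data, key=lambda p: p[1])[0]
```

The point of the loop is that new training data concentrates where the optimizer needs the surrogate to be accurate, rather than being fixed in advance.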
Coupled Oscillatory Recurrent Neural Network (coRNN): An accurate and (gradient) stable architecture for learning long time dependencies
(2020) SAM Research Report. Circuits of biological neurons, such as those in the functional parts of the brain, can be modeled as networks of coupled oscillators. Inspired by the ability of these systems to express a rich set of outputs while keeping (gradients of) state variables bounded, we propose a novel architecture for recurrent neural networks. Our proposed RNN is based on a time-discretization of a system of second-order ordinary differential equations, modeling ...
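The idea of a recurrent cell as a discretized second-order ODE can be sketched with a scalar toy oscillator. This is an assumption-laden illustration, not the paper's coRNN update: the actual architecture uses vector states and learned weight matrices, and its exact discretization is given in the report.

```python
import math

def oscillator_rnn(inputs, dt=0.1, gamma=1.0, eps=0.1, w=1.0, v=1.0, b=0.0):
    """Scalar sketch of an oscillator-based recurrent cell: the hidden
    state (y, z) follows a semi-implicit Euler discretization of the
    damped, driven second-order ODE
        y'' = tanh(w*y + v*u + b) - gamma*y - eps*y',
    where u is the input sequence. Bounded nonlinearity plus damping
    keeps the hidden states bounded over long sequences."""
    y, z = 0.0, 0.0                  # position and velocity
    states = []
    for u in inputs:
        z = z + dt * (math.tanh(w * y + v * u + b) - gamma * y - eps * z)
        y = y + dt * z               # uses the updated velocity
        states.append(y)
    return states
```

The semi-implicit update (velocity first, then position with the new velocity) is a standard choice for oscillatory systems because it avoids the energy blow-up of a fully explicit Euler step.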
Enhancing accuracy of deep learning algorithms by training with low-discrepancy sequences
(2020) SAM Research Report. We propose a deep supervised learning algorithm based on low-discrepancy sequences as the training set. By a combination of theoretical arguments and extensive numerical experiments we demonstrate that the proposed algorithm significantly outperforms standard deep learning algorithms that are based on randomly chosen training data, for problems in moderately high dimensions. The proposed algorithm provides an efficient method for building ...
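The report's specific construction is not reproduced here; as a minimal sketch, the Halton sequence is one standard low-discrepancy sequence that could stand in for random draws when building such a training set (the choice of sequence and all names are illustrative):

```python
def halton(index, base):
    """Van der Corput radical inverse of `index` in the given base."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

def halton_points(n, bases=(2, 3)):
    """First n points of a Halton low-discrepancy sequence in [0, 1)^d,
    one coprime base per dimension (here d = 2 with bases 2 and 3)."""
    return [tuple(halton(i, b) for b in bases) for i in range(1, n + 1)]

training_set = halton_points(16)   # deterministic, evenly spread points
```

Unlike i.i.d. uniform samples, these points fill the unit square with low discrepancy, which is what drives the improved integration (and hence generalization) error rates.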
Estimates on the generalization error of Physics Informed Neural Networks (PINNs) for approximating PDEs
(2020) SAM Research Report. Physics informed neural networks (PINNs) have recently been widely used for robust and accurate approximation of PDEs. We provide rigorous upper bounds on the generalization error of PINNs approximating solutions of the forward problem for PDEs. An abstract formalism is introduced and stability properties of the underlying PDE are leveraged to derive an estimate for the generalization error in terms of the training error and number of ...
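The training error appearing in such bounds is the physics informed residual loss itself. As a minimal, hypothetical sketch (a toy 1D ODE standing in for a PDE, and finite differences standing in for automatic differentiation):

```python
import math

def pinn_loss(u, xs, h=1e-4):
    """Physics informed loss for the toy problem u'(x) = u(x), u(0) = 1.

    Interior term: mean squared residual of the equation at the
    collocation points xs, with u' approximated by a central finite
    difference. Boundary term: squared mismatch of u(0) = 1.
    Minimizing this loss over a network's parameters is the training
    error that the generalization bounds are stated in terms of."""
    res = sum(((u(x + h) - u(x - h)) / (2 * h) - u(x)) ** 2 for x in xs)
    return res / len(xs) + (u(0.0) - 1.0) ** 2

collocation = [i / 10 for i in range(1, 10)]
loss_exact = pinn_loss(math.exp, collocation)        # exact solution e^x
loss_wrong = pinn_loss(lambda x: 1.0 + x, collocation)
```

The exact solution drives the loss to (near) zero, while any function that violates the equation pays a residual penalty; no labeled solution data is needed beyond the boundary condition.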
Physics Informed Neural Networks for Simulating Radiative Transfer
(2020) SAM Research Report. We propose a novel machine learning algorithm for simulating radiative transfer. Our algorithm is based on physics informed neural networks (PINNs), which are trained by minimizing the residual of the underlying radiative transfer equations. We present extensive experiments and theoretical error estimates to demonstrate that PINNs provide a very easy to implement, fast, robust and accurate method for simulating radiative transfer. We also ...