Search Results
- On generalization error estimates of physics informed neural networks for approximating dispersive PDEs
  (2021, SAM Research Report) Physics informed neural networks (PINNs) have recently been widely used for robust and accurate approximation of PDEs. We provide rigorous upper bounds on the generalization error of PINNs approximating solutions of the forward problem for several dispersive PDEs.
- On universal approximation and error bounds for Fourier Neural Operators
  (2021, SAM Research Report) Fourier neural operators (FNOs) have recently been proposed as an effective framework for learning operators that map between infinite-dimensional spaces. We prove that FNOs are universal, in the sense that they can approximate any continuous operator to desired accuracy. Moreover, we suggest a mechanism by which FNOs can approximate operators associated with PDEs efficiently. Explicit error bounds are derived to show that the size of the ...
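As a rough illustration of the spectral mechanism behind FNOs, the sketch below implements a single-channel 1-D Fourier layer: transform to Fourier space, weight a truncated set of low modes, transform back, and add a local linear path before the nonlinearity. All names (`fourier_layer`, `W_modes`, `W_local`) are illustrative assumptions; real FNOs stack multi-channel layers with learned complex weight tensors per retained mode.

```python
import numpy as np

def fourier_layer(u, W_modes, W_local, n_modes):
    """One toy 1-D, single-channel Fourier layer:
    FFT -> weight the lowest n_modes -> inverse FFT -> add local linear path."""
    u_hat = np.fft.rfft(u)                            # spectral coefficients
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = W_modes * u_hat[:n_modes]     # per-mode learned weights
    spectral = np.fft.irfft(out_hat, n=u.shape[-1])   # back to physical space
    return np.tanh(spectral + W_local * u)            # nonlinearity

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
u = np.sin(x)                                         # sample input function
n_modes = 8
W_modes = rng.standard_normal(n_modes) + 1j * rng.standard_normal(n_modes)
W_local = 0.5
v = fourier_layer(u, W_modes, W_local, n_modes)       # output on the same grid
```

Because the learned action lives on a fixed number of Fourier modes, the same weights can be applied to inputs sampled at any resolution, which is the discretization-invariance FNOs are known for.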
- On the well-posedness of Bayesian inversion for PDEs with ill-posed forward problems
  (2021, SAM Research Report) We study the well-posedness of Bayesian inverse problems for PDEs, for which the underlying forward problem may be ill-posed. Such PDEs, which include the fundamental equations of fluid dynamics, are characterized by the lack of rigorous global existence and stability results as well as possible non-convergence of numerical approximations. Under very general hypotheses on approximations to these PDEs, we prove that the posterior measure, ...
- Well-posedness of Bayesian inverse problems for hyperbolic conservation laws
  (2021, SAM Research Report) We study the well-posedness of the Bayesian inverse problem for scalar hyperbolic conservation laws, where statistical information about inputs such as the initial datum and the (possibly discontinuous) flux function is inferred from noisy measurements. In particular, the Lipschitz continuity of the measurement-to-posterior map, as well as the stability of the posterior to approximations, are established with respect to the Wasserstein ...
- On the vanishing viscosity limit of statistical solutions of the incompressible Navier-Stokes equations
  (2021, SAM Research Report) We study statistical solutions of the incompressible Navier-Stokes equations and their vanishing viscosity limit. We show that a formulation using correlation measures, which are probability measures accounting for spatial correlations, and moment equations is equivalent to statistical solutions in the Foiaş-Prodi sense. Under the assumption of weak scaling, a weaker version of Kolmogorov's small-scale self-similarity hypothesis ...
- Long Expressive Memory for Sequence Modeling
  (2021, SAM Research Report) We propose a novel method called Long Expressive Memory (LEM) for learning long-term sequential dependencies. LEM is gradient-based, it can efficiently process sequential tasks with very long-term dependencies, and it is sufficiently expressive to be able to learn complicated input-output maps. To derive LEM, we consider a system of multiscale ordinary differential equations, as well as a suitable time-discretization of this system. For ...
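The multiscale ODE discretization the abstract mentions can be sketched as a recurrent cell in which two learned, input-dependent time steps gate the updates of two hidden states. The update rule below follows this multiscale-gating pattern; the exact parameterization in the paper may differ, and all names (`lem_step`, the weight dictionary `p`) are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lem_step(y, z, u, p, dt=1.0):
    """One recurrent step: learned, input-dependent time steps dt1, dt2 in
    (0, dt) gate convex-combination updates of the two hidden states z, y."""
    dt1 = dt * sigmoid(p["W1"] @ y + p["V1"] @ u + p["b1"])
    dt2 = dt * sigmoid(p["W2"] @ y + p["V2"] @ u + p["b2"])
    z_new = (1 - dt1) * z + dt1 * np.tanh(p["Wz"] @ y + p["Vz"] @ u + p["bz"])
    y_new = (1 - dt2) * y + dt2 * np.tanh(p["Wy"] @ z_new + p["Vy"] @ u + p["by"])
    return y_new, z_new

rng = np.random.default_rng(0)
d, m = 4, 3                      # hidden size, input size (toy values)
p = {k: 0.3 * rng.standard_normal((d, d)) for k in ["W1", "W2", "Wz", "Wy"]}
p.update({k: 0.3 * rng.standard_normal((d, m)) for k in ["V1", "V2", "Vz", "Vy"]})
p.update({k: np.zeros(d) for k in ["b1", "b2", "bz", "by"]})
y = z = np.zeros(d)
for u in rng.standard_normal((10, m)):   # run 10 steps on random inputs
    y, z = lem_step(y, z, u, p)
```

Because each state is a convex combination of its previous value and a tanh-bounded candidate, the hidden states remain in (-1, 1), which is one plausible ingredient for stable gradient-based training on long sequences.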
- Graph-Coupled Oscillator Networks
  (2022, SAM Research Report) We propose Graph-Coupled Oscillator Networks (GraphCON), a novel framework for deep learning on graphs. It is based on discretizations of a second-order system of ordinary differential equations (ODEs), which model a network of nonlinear forced and damped oscillators, coupled via the adjacency structure of the underlying graph. The flexibility of our framework permits any basic GNN layer (e.g. convolutional or attentional) as the coupling ...
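A minimal sketch of the kind of discretized oscillator dynamics the abstract describes: node features Y and velocities Z evolve under a damped, forced second-order update, with the forcing supplied by a graph-convolution coupling. The specific constants, the normalization, and the names (`graphcon_step`, `A_hat`) are assumptions for illustration; the paper allows any GNN layer as the coupling.

```python
import numpy as np

def graphcon_step(Y, Z, A_hat, W, dt=0.1, gamma=1.0, alpha=1.0):
    """One explicit step of damped, forced, graph-coupled oscillators:
    Z is the velocity of the node features Y; the coupling F(Y) = A_hat @ Y @ W
    is a plain graph-convolution layer here."""
    Z = Z + dt * (np.tanh(A_hat @ Y @ W) - gamma * Y - alpha * Z)
    Y = Y + dt * Z
    return Y, Z

rng = np.random.default_rng(0)
n, d = 5, 3                                        # nodes, feature channels
A = (rng.random((n, n)) < 0.4).astype(float)       # random adjacency
A = np.maximum(A, A.T)
np.fill_diagonal(A, 1.0)                           # symmetric, with self-loops
deg = A.sum(1)
A_hat = A / np.sqrt(np.outer(deg, deg))            # symmetric normalization
W = 0.5 * rng.standard_normal((d, d))
Y, Z = rng.standard_normal((n, d)), np.zeros((n, d))
for _ in range(20):                                # unroll 20 "layers"
    Y, Z = graphcon_step(Y, Z, A_hat, W)
```

The oscillatory second-order dynamics, rather than a first-order diffusion, are what the authors credit with mitigating the oversmoothing seen in deep message-passing stacks.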
- Weak physics informed neural networks for approximating entropy solutions of hyperbolic conservation laws
  (2022, SAM Research Report) Physics informed neural networks (PINNs) require regularity of solutions of the underlying PDE to guarantee accurate approximation. Consequently, they may fail at approximating discontinuous solutions of PDEs such as nonlinear hyperbolic equations. To ameliorate this, we propose a novel variant of PINNs, termed weak PINNs (wPINNs), for accurate approximation of entropy solutions of scalar conservation laws. wPINNs are based on approximating ...
- On the approximation of functions by tanh neural networks
  (2021, SAM Research Report) We derive bounds on the error, in high-order Sobolev norms, incurred in the approximation of Sobolev-regular as well as analytic functions by neural networks with the hyperbolic tangent activation function. These bounds provide explicit estimates on the approximation error with respect to the size of the neural networks. We show that tanh neural networks with only two hidden layers suffice to approximate functions at comparable or better ...
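To make the two-hidden-layer setting concrete, here is a small numerical illustration, not the paper's constructive proof: a two-hidden-layer tanh network with random inner weights and a least-squares-fit linear readout approximating an analytic target on [-1, 1]. The widths, scalings, and target are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 400)[:, None]          # grid on [-1, 1]
target = np.sin(np.pi * x[:, 0])              # analytic target function

# Two hidden tanh layers with fixed random weights; only the final
# linear readout is fitted (by least squares).
W1, b1 = rng.standard_normal((1, 64)), rng.standard_normal(64)
W2, b2 = rng.standard_normal((64, 64)) / 8.0, rng.standard_normal(64)
H = np.tanh(np.tanh(x @ W1 + b1) @ W2 + b2)   # second-hidden-layer features
c, *_ = np.linalg.lstsq(H, target, rcond=None)
err = np.max(np.abs(H @ c - target))          # sup-norm error on the grid
```

Even this crude random-feature variant drives the sup-norm error well below the target's unit amplitude, consistent with the claim that shallow tanh networks are already quite expressive; the paper's bounds are of course for fully trained weights and stronger Sobolev norms.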
- An operator preconditioning perspective on training in physics-informed machine learning
  (2023, SAM Research Report) In this paper, we investigate the behavior of gradient descent algorithms in physics-informed machine learning methods like PINNs, which minimize residuals connected to partial differential equations (PDEs). Our key result is that the difficulty in training these models is closely related to the conditioning of a specific differential operator. This operator, in turn, is associated to the Hermitian square of the differential operator ...
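The role of the Hermitian square can be seen already in a linear toy problem: minimizing the residual ||Au - f||^2 by gradient descent is governed by A^T A, whose condition number is the square of A's. The sketch below checks this for a standard 1-D Dirichlet Laplacian stencil (a stand-in for the differential operators in the paper, not their setting).

```python
import numpy as np

n = 50
h = 1.0 / (n + 1)
# Second-order finite-difference Laplacian with Dirichlet boundary conditions.
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

kappa_A = np.linalg.cond(A)          # conditioning of the operator itself
kappa_AtA = np.linalg.cond(A.T @ A)  # conditioning of its Hermitian square
# For symmetric positive definite A, cond(A^T A) = cond(A)^2: gradient descent
# on the squared residual sees a dramatically worse-conditioned problem than
# solving A u = f directly, which is the paper's motivation for preconditioning.
```

This squaring effect is one concrete reason residual-minimization methods such as PINNs can train slowly on stiff operators, and why operator preconditioning helps.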