Search Results
Variable-Input Deep Operator Networks
(2022) SAM Research Report
Existing architectures for operator learning require that the number and locations of sensors (where the input functions are evaluated) remain the same across all training and test samples, significantly restricting the range of their applicability. We address this issue by proposing a novel operator learning framework, termed Variable-Input Deep Operator Network (VIDON), which allows for random sensors whose number and locations can vary ...
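As a rough illustration of the core difficulty VIDON addresses, the sketch below (plain numpy, entirely illustrative and not the report's architecture) encodes a variable-size set of (sensor location, sensor value) pairs into a fixed-size, permutation-invariant feature vector, so that inputs sampled at different numbers and locations of sensors can be fed to one downstream network.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical shared per-sensor encoder: a fixed random feature map
    # from (location, value) pairs to 64 features.
    W = rng.normal(size=(2, 64))
    b = rng.normal(size=64)

    def encode_sensors(locations, values):
        # Apply the shared map to every sensor and average, so the result
        # does not depend on sensor ordering or on the number of sensors.
        pairs = np.stack([locations, values], axis=-1)   # (num_sensors, 2)
        features = np.tanh(pairs @ W + b)                # (num_sensors, 64)
        return features.mean(axis=0)                     # (64,)

    # The same input function observed at different numbers and locations
    # of sensors yields encodings of equal size.
    x1 = rng.uniform(0.0, 1.0, size=37);  u1 = np.sin(2 * np.pi * x1)
    x2 = rng.uniform(0.0, 1.0, size=112); u2 = np.sin(2 * np.pi * x2)
    print(encode_sensors(x1, u1).shape, encode_sensors(x2, u2).shape)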
Generic bounds on the approximation error for physics-informed (and) operator learning
(2022) SAM Research Report
We propose a very general framework for deriving rigorous bounds on the approximation error for physics-informed neural networks (PINNs) and operator learning architectures such as DeepONets and FNOs as well as for physics-informed operator learning. These bounds guarantee that PINNs and (physics-informed) DeepONets or FNOs will efficiently approximate the underlying solution or solution operator of generic partial differential equations ...
On the discrete equation model for compressible multiphase fluid flows
(2022) SAM Research Report
The modeling of multi-phase flow is very challenging, given the range of scales as well as the diversity of flow regimes that one encounters in this context. We revisit the discrete equation method (DEM) for two-phase flow in the absence of heat conduction and mass transfer. We analyze the resulting probability coefficients and prove their local convexity, rigorously establishing that our version of DEM can model different flow regimes ...
Error estimates for physics informed neural networks approximating the Navier-Stokes equations
(2022) SAM Research Report
We prove rigorous bounds on the errors resulting from the approximation of the incompressible Navier-Stokes equations with (extended) physics-informed neural networks. We show that the underlying PDE residual can be made arbitrarily small for tanh neural networks with two hidden layers. Moreover, the total error can be estimated in terms of the training error, network size and number of quadrature points. The theory is illustrated with ...
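For context, the sketch below (PyTorch, purely illustrative; the viscosity, network width and collocation points are arbitrary choices, not those analyzed in the report) shows how the interior PDE residual of the 2D incompressible Navier-Stokes equations can be evaluated for a tanh network with two hidden layers using automatic differentiation.

    import torch

    torch.manual_seed(0)
    nu = 0.01  # illustrative viscosity

    # tanh network with two hidden layers: (t, x, y) -> (u, v, p)
    net = torch.nn.Sequential(
        torch.nn.Linear(3, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 3),
    )

    def grad(out, inp):
        return torch.autograd.grad(out, inp, grad_outputs=torch.ones_like(out),
                                   create_graph=True)[0]

    def ns_residual(txy):
        # Mean squared PDE residual at a batch of interior collocation points.
        txy.requires_grad_(True)
        u, v, p = net(txy).unbind(dim=1)
        du, dv, dp = grad(u, txy), grad(v, txy), grad(p, txy)
        u_t, u_x, u_y = du.unbind(dim=1)
        v_t, v_x, v_y = dv.unbind(dim=1)
        p_x, p_y = dp[:, 1], dp[:, 2]
        u_xx, u_yy = grad(u_x, txy)[:, 1], grad(u_y, txy)[:, 2]
        v_xx, v_yy = grad(v_x, txy)[:, 1], grad(v_y, txy)[:, 2]
        r1 = u_t + u * u_x + v * u_y + p_x - nu * (u_xx + u_yy)  # x-momentum
        r2 = v_t + u * v_x + v * v_y + p_y - nu * (v_xx + v_yy)  # y-momentum
        r3 = u_x + v_y                                            # divergence-free
        return (r1**2 + r2**2 + r3**2).mean()

    print(ns_residual(torch.rand(128, 3)))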
Graph-Coupled Oscillator Networks
(2022) SAM Research Report
We propose Graph-Coupled Oscillator Networks (GraphCON), a novel framework for deep learning on graphs. It is based on discretizations of a second-order system of ordinary differential equations (ODEs), which model a network of nonlinear forced and damped oscillators, coupled via the adjacency structure of the underlying graph. The flexibility of our framework permits any basic GNN layer (e.g. convolutional or attentional) as the coupling ...
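One schematic way to write such a coupled-oscillator system (the symbols below are illustrative and not taken from the report): for the matrix X(t) of node features,

    X'' = \sigma\big(F_\theta(X, t)\big) - \gamma X - \alpha X',

where F_\theta is any basic GNN layer coupling the nodes through the graph's adjacency structure, \gamma X acts as a restoring force and \alpha X' as damping; a time-discretization of this second-order system then yields the stacked network layers.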
On the vanishing viscosity limit of statistical solutions of the incompressible Navier-Stokes equations
(2021) SAM Research Report
We study statistical solutions of the incompressible Navier-Stokes equations and their vanishing viscosity limit. We show that a formulation using correlation measures, which are probability measures accounting for spatial correlations, and moment equations is equivalent to statistical solutions in the Foiaş-Prodi sense. Under the assumption of weak scaling, a weaker version of Kolmogorov's hypothesis of self-similarity at small scales ...
Long Expressive Memory for Sequence Modeling
(2021) SAM Research Report
We propose a novel method called Long Expressive Memory (LEM) for learning long-term sequential dependencies. LEM is gradient-based, it can efficiently process sequential tasks with very long-term dependencies, and it is sufficiently expressive to be able to learn complicated input-output maps. To derive LEM, we consider a system of multiscale ordinary differential equations, as well as a suitable time-discretization of this system. For ...
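Purely as a schematic of what a multiscale system of ODEs with learnable, state-dependent time scales can look like (a generic illustration, not the report's actual LEM equations): for hidden states y(t), z(t) driven by an input u(t),

    dy/dt = \hat{\sigma}(W_1 y + V_1 u + b_1) \odot \big(\sigma(W_y z + V_y u + b_y) - y\big),
    dz/dt = \hat{\sigma}(W_2 y + V_2 u + b_2) \odot \big(\sigma(W_z y + V_z u + b_z) - z\big),

where the sigmoid gates \hat{\sigma} lie in (0, 1) and set a separate effective time scale for each component, and a suitable time-discretization of such a system gives a recurrent sequence model.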
On universal approximation and error bounds for Fourier Neural Operators
(2021) SAM Research Report
Fourier neural operators (FNOs) have recently been proposed as an effective framework for learning operators that map between infinite-dimensional spaces. We prove that FNOs are universal, in the sense that they can approximate any continuous operator to desired accuracy. Moreover, we suggest a mechanism by which FNOs can approximate operators associated with PDEs efficiently. Explicit error bounds are derived to show that the size of the ...
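To make the FNO mechanism concrete, the sketch below (numpy, 1D, illustrative; the grid size, channel width and mode cutoff are arbitrary choices, not the configurations analyzed in the report) implements a single Fourier layer: transform the channels with the FFT, multiply the lowest modes by complex weights, truncate the rest, and add a pointwise linear term followed by a nonlinearity.

    import numpy as np

    rng = np.random.default_rng(0)
    n_points, channels, n_modes = 128, 8, 16

    # Illustrative (random) weights: complex multipliers for the retained
    # Fourier modes and a pointwise linear map acting on the channels.
    R = rng.normal(size=(n_modes, channels, channels)) \
        + 1j * rng.normal(size=(n_modes, channels, channels))
    W = rng.normal(size=(channels, channels)) / channels

    def fourier_layer(v):
        # v: (n_points, channels), sampled on a uniform spatial grid.
        v_hat = np.fft.rfft(v, axis=0)                 # transform each channel
        out_hat = np.zeros_like(v_hat)
        out_hat[:n_modes] = np.einsum("kio,ki->ko", R, v_hat[:n_modes])
        spectral = np.fft.irfft(out_hat, n=n_points, axis=0)
        return np.maximum(spectral + v @ W, 0.0)       # ReLU(K v + W v)

    v = rng.normal(size=(n_points, channels))
    print(fourier_layer(v).shape)                      # (128, 8)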
On the well-posedness of Bayesian inversion for PDEs with ill-posed forward problems
(2021) SAM Research Report
We study the well-posedness of Bayesian inverse problems for PDEs, for which the underlying forward problem may be ill-posed. Such PDEs, which include the fundamental equations of fluid dynamics, are characterized by the lack of rigorous global existence and stability results as well as possible non-convergence of numerical approximations. Under very general hypotheses on approximations to these PDEs, we prove that the posterior measure, ...
Well-posedness of Bayesian inverse problems for hyperbolic conservation laws
(2021) SAM Research Report
We study the well-posedness of the Bayesian inverse problem for scalar hyperbolic conservation laws, where the statistical information about inputs such as the initial datum and the (possibly discontinuous) flux function is inferred from noisy measurements. In particular, the Lipschitz continuity of the measurement-to-posterior map, as well as the stability of the posterior to approximations, are established with respect to the Wasserstein ...