Error analysis for deep neural network approximations of parametric hyperbolic conservation laws
(2022) SAM Research Report. We derive rigorous bounds on the error resulting from the approximation of the solution of parametric hyperbolic scalar conservation laws with ReLU neural networks. We show that the approximation error can be made as small as desired with ReLU neural networks that overcome the curse of dimensionality. In addition, we provide an explicit upper bound on the generalization error in terms of the training error, number of training samples and ...
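The abstract is truncated before the bound itself; as a purely illustrative sketch of the shape such generalization bounds typically take (not the paper's exact statement), with E_T the computable training error and N the number of i.i.d. training samples:

```latex
% Illustrative shape only, not the paper's exact bound.
\[
  \mathcal{E}_G \;\lesssim\; \mathcal{E}_T \;+\; C\,\sqrt{\frac{\log N}{N}},
\]
% where C depends on the network size and on stability properties of the
% underlying parametric solution operator.
```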
Neural Inverse Operators for Solving PDE Inverse Problems
(2023) SAM Research Report. A large class of inverse problems for PDEs is only well-defined as mappings from operators to functions. Existing operator learning frameworks map functions to functions and need to be modified to learn inverse maps from data. We propose a novel architecture, termed Neural Inverse Operators (NIOs), to solve these PDE inverse problems. Motivated by the underlying mathematical structure, NIO is based on a suitable composition of DeepONets ...
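The abstract is cut off mid-construction; below is a minimal PyTorch sketch of the compositional idea, a DeepONet-style map followed by a further learned block. The module sizes and the pointwise post-network are my assumptions, not the paper's architecture.

```python
# Minimal sketch: an inverse map built by composing a DeepONet-style
# encoder with a follow-up learned block. Illustrative only.
import torch
import torch.nn as nn

class DeepONetBranchTrunk(nn.Module):
    """DeepONet-style map: sensor readings of the input data (branch)
    combined with query coordinates (trunk)."""
    def __init__(self, n_sensors=64, width=64, p=32):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(n_sensors, width), nn.ReLU(),
                                    nn.Linear(width, p))
        self.trunk = nn.Sequential(nn.Linear(1, width), nn.ReLU(),
                                   nn.Linear(width, p))

    def forward(self, sensors, x):
        b = self.branch(sensors)   # (batch, p)
        t = self.trunk(x)          # (n_query, p)
        return b @ t.T             # (batch, n_query)

class NIOSketch(nn.Module):
    """Composition: DeepONet features -> pointwise MLP standing in for
    the follow-up neural-operator block."""
    def __init__(self, n_sensors=64):
        super().__init__()
        self.deeponet = DeepONetBranchTrunk(n_sensors)
        self.post = nn.Sequential(nn.Linear(1, 32), nn.ReLU(),
                                  nn.Linear(32, 1))

    def forward(self, sensors, x):
        u = self.deeponet(sensors, x).unsqueeze(-1)  # (batch, n_query, 1)
        return self.post(u).squeeze(-1)              # (batch, n_query)

# Usage: map 64 measured sensor values to an unknown coefficient
# evaluated at 100 query points.
model = NIOSketch()
out = model(torch.randn(8, 64), torch.linspace(0, 1, 100).unsqueeze(-1))
print(out.shape)  # torch.Size([8, 100])
```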
Convolutional Neural Operators
(2023) SAM Research Report. Although very successfully used in machine learning, convolution-based neural network architectures -- believed to be inconsistent in function space -- have been largely ignored in the context of learning solution operators of PDEs. Here, we adapt convolutional neural networks to demonstrate that they are indeed able to process functions as inputs and outputs. The resulting architecture, termed convolutional neural operators (CNOs), ...
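A minimal sketch of the aliasing-aware design this points at: convolutions act on samples of a function, and the signal is oversampled before the pointwise nonlinearity so the activation does not alias high frequencies. The specific up/downsampling factors and activation below are illustrative assumptions, not the paper's exact layer.

```python
# Sketch of a CNO-style block: convolve function samples, upsample before
# the nonlinearity, then return to the original grid. Sizes illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNOBlockSketch(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size=5, padding=2)

    def forward(self, v):
        # v: (batch, channels, n) -- samples of a band-limited function
        v = self.conv(v)
        v = F.interpolate(v, scale_factor=2, mode="linear")    # oversample
        v = F.leaky_relu(v)                                    # pointwise nonlinearity
        v = F.interpolate(v, scale_factor=0.5, mode="linear")  # back to original grid
        return v

block = CNOBlockSketch()
print(block(torch.randn(4, 16, 128)).shape)  # torch.Size([4, 16, 128])
```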
Are Neural Operators Really Neural Operators? Frame Theory Meets Operator Learning
(2023) SAM Research Report. Recently, there has been significant interest in operator learning, i.e., learning mappings between infinite-dimensional function spaces. This has been particularly relevant in the context of learning partial differential equations from data. However, it has been observed that proposed models may not behave as operators when implemented on a computer, calling into question the very essence of what operator learning should be. We contend that in ...
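As background for where a frame-theoretic analysis starts, recall the standard frame condition: a family {φ_i} in a Hilbert space H is a frame if there are constants 0 < A ≤ B with

```latex
\[
  A\,\|f\|_H^2 \;\le\; \sum_i \big|\langle f, \varphi_i \rangle\big|^2 \;\le\; B\,\|f\|_H^2
  \qquad \text{for all } f \in H.
\]
```

Frames allow stable, redundant discretizations of function-space objects, which is what makes them a natural lens for asking whether a discretized model still acts as an operator.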
Multilevel domain decomposition-based architectures for physics-informed neural networks
(2023) SAM Research Report. Physics-informed neural networks (PINNs) are a popular and powerful approach for solving problems involving differential equations, yet they often struggle to solve problems with high-frequency and/or multi-scale solutions. Finite basis physics-informed neural networks (FBPINNs) improve the performance of PINNs in this regime by combining them with an overlapping domain decomposition approach. In this paper, the FBPINN approach is extended ...
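A minimal sketch of the FBPINN ansatz underlying this extension: the global solution is a sum of subdomain networks weighted by smooth window functions normalized to form a partition of unity. The window shape (a Gaussian here) and the subdomain layout are illustrative assumptions.

```python
# Sketch of the FBPINN ansatz: u(x) = sum_j w_j(x) * u_j(x), with the
# windows w_j normalized to sum to one. Layout illustrative.
import torch
import torch.nn as nn

def window(x, center, width):
    # Smooth window localized near one subdomain (illustrative choice).
    return torch.exp(-((x - center) / width) ** 2)

class FBPINNSketch(nn.Module):
    def __init__(self, centers):
        super().__init__()
        self.centers = centers
        self.nets = nn.ModuleList([
            nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
            for _ in centers
        ])

    def forward(self, x):
        # x: (n, 1). Normalize the windows so they form a partition of unity.
        ws = torch.stack([window(x, c, 0.5) for c in self.centers])  # (J, n, 1)
        ws = ws / ws.sum(dim=0, keepdim=True)
        us = torch.stack([net(x) for net in self.nets])              # (J, n, 1)
        return (ws * us).sum(dim=0)

model = FBPINNSketch(centers=[0.0, 0.5, 1.0])
print(model(torch.rand(64, 1)).shape)  # torch.Size([64, 1])
```

Each subdomain network only has to fit the solution locally, which is what restores PINN performance on multi-scale problems.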
Numerical analysis of physics-informed neural networks and related models in physics-informed machine learning
(2024) SAM Research Report. Physics-informed neural networks (PINNs) and their variants have been very popular in recent years as algorithms for the numerical simulation of both forward and inverse problems for partial differential equations. This article aims to provide a comprehensive review of currently available results on the numerical analysis of PINNs and related models that constitute the backbone of physics-informed machine learning. We provide a unified ...
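For reference, the object such analyses bound is the standard PINN training loss: for a PDE residual R[u] = 0 on a domain with data g on the boundary, in collocation form with weighting parameter λ,

```latex
\[
  L(\theta) \;=\; \frac{1}{N_{\mathrm{int}}} \sum_{n=1}^{N_{\mathrm{int}}}
    \big| \mathcal{R}[u_\theta](x_n) \big|^2
  \;+\; \frac{\lambda}{N_{\mathrm{bd}}} \sum_{m=1}^{N_{\mathrm{bd}}}
    \big| u_\theta(y_m) - g(y_m) \big|^2,
\]
```

where the x_n are interior collocation points and the y_m are boundary/initial points.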
Weak physics informed neural networks for approximating entropy solutions of hyperbolic conservation laws
(2022) SAM Research Report. Physics-informed neural networks (PINNs) require regularity of solutions of the underlying PDE to guarantee accurate approximation. Consequently, they may fail at approximating discontinuous solutions of PDEs such as nonlinear hyperbolic equations. To ameliorate this, we propose a novel variant of PINNs, termed weak PINNs (wPINNs), for accurate approximation of entropy solutions of scalar conservation laws. wPINNs are based on approximating ...
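The abstract is truncated before the construction; the key object (paraphrased here, in notation of my choosing) is a Kruzhkov-type entropy residual tested weakly: for u_t + f(u)_x = 0, a constant c, and a nonnegative test function φ,

```latex
\[
  \mathcal{R}(u_\theta;\, \varphi, c) \;=\;
  -\int_0^T\!\!\int
    \Big( |u_\theta - c|\, \partial_t \varphi
    \;+\; \mathrm{sgn}(u_\theta - c)\,\big(f(u_\theta) - f(c)\big)\, \partial_x \varphi \Big)\, dx\, dt.
\]
```

Entropy solutions make this residual nonpositive for all such φ and c, so training can drive a maximum over (adversarially chosen) test functions toward zero from above; the exact loss used in the paper may differ in detail.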
Convolutional Neural Operators for robust and accurate learning of PDEs
(2023) SAM Research Report. Although very successfully used in conventional machine learning, convolution-based neural network architectures -- believed to be inconsistent in function space -- have been largely ignored in the context of learning solution operators of PDEs. Here, we present novel adaptations for convolutional neural networks to demonstrate that they are indeed able to process functions as inputs and outputs. The resulting architecture, termed ...
On the vanishing viscosity limit of statistical solutions of the incompressible Navier-Stokes equations
(2021) SAM Research Report. We study statistical solutions of the incompressible Navier--Stokes equations and their vanishing viscosity limit. We show that a formulation using correlation measures, which are probability measures accounting for spatial correlations, and moment equations is equivalent to statistical solutions in the Foiaş--Prodi sense. Under the assumption of weak scaling, a weaker version of Kolmogorov's hypothesis of self-similarity at small scales ...
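Schematically (in notation of my choosing; the paper's precise definitions may differ), a correlation measure is a hierarchy ν = (ν¹, ν², …) in which ν^k assigns to almost every k-tuple of spatial points a joint probability law on the k-fold product of the state space U, symmetric in its arguments and consistent under marginalization:

```latex
% Schematic: consistency of marginals, with pi the projection of U^k onto
% its first k-1 factors and # denoting the pushforward of measures;
% symmetry under permutations of the arguments is required as well.
\[
  \nu^k_{x_1,\dots,x_k} \in \mathcal{P}\big(U^k\big),
  \qquad
  \pi_{\#}\,\nu^{k}_{x_1,\dots,x_k} \;=\; \nu^{k-1}_{x_1,\dots,x_{k-1}}.
\]
```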
Neural Oscillators are Universal
(2023) SAM Research Report. Coupled oscillators are increasingly being used as the basis of machine learning (ML) architectures, for instance in sequence modeling, graph representation learning and in physical neural networks used in analog ML devices. We introduce an abstract class of neural oscillators that encompasses these architectures and prove that neural oscillators are universal, i.e., they can approximate any continuous and causal operator mapping ...
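One concrete member of such an abstract class (an illustrative form, not necessarily the paper's exact definition) is a forced second-order system with a linear readout:

```latex
% Illustrative member of a neural-oscillator class, not the paper's
% exact definition.
\[
  \ddot{y}(t) \;=\; \sigma\big( W\,y(t) + V\,u(t) + b \big),
  \qquad
  z(t) \;=\; A\,y(t),
\]
% with hidden oscillator state y, input signal u, output z, and an
% activation sigma (e.g. tanh). Universality means outputs z of such
% systems can approximate any continuous, causal operator u -> z.
```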