Graph-Coupled Oscillator Networks
(2022) SAM Research Report. We propose Graph-Coupled Oscillator Networks (GraphCON), a novel framework for deep learning on graphs. It is based on discretizations of a second-order system of ordinary differential equations (ODEs), which model a network of nonlinear forced and damped oscillators, coupled via the adjacency structure of the underlying graph. The flexibility of our framework permits any basic GNN layer (e.g. convolutional or attentional) as the coupling ...
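The oscillator dynamics described in this abstract can be sketched as a simple explicit update. This is an illustrative sketch only, not the report's exact discretization: the coupling function here is assumed to be a plain graph convolution `tanh(A @ X @ W)` (the framework permits any GNN layer), and the step sizes and damping/forcing coefficients are arbitrary.

```python
import numpy as np

def graphcon_step(X, Y, A, W, dt=0.1, gamma=1.0, alpha=1.0):
    """One GraphCON-style update (illustrative sketch).

    X: node "positions" (n, d); Y: node "velocities" (n, d);
    A: row-normalized adjacency matrix (n, n); W: weights (d, d).
    The coupling tanh(A @ X @ W) is a plain graph convolution,
    chosen here for simplicity; GraphCON admits any basic GNN
    layer in its place.
    """
    F = np.tanh(A @ X @ W)                    # nonlinear forcing, coupled via the graph
    Y = Y + dt * (F - gamma * X - alpha * Y)  # damped, forced second-order dynamics
    X = X + dt * Y                            # position update from new velocity
    return X, Y
```

Stacking many such steps plays the role of depth: each layer advances the coupled oscillator system by one time step.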
Weak physics informed neural networks for approximating entropy solutions of hyperbolic conservation laws
(2022) SAM Research Report. Physics informed neural networks (PINNs) require regularity of solutions of the underlying PDE to guarantee accurate approximation. Consequently, they may fail at approximating discontinuous solutions of PDEs such as nonlinear hyperbolic equations. To ameliorate this, we propose a novel variant of PINNs, termed weak PINNs (wPINNs), for accurate approximation of entropy solutions of scalar conservation laws. wPINNs are based on approximating ...
Nonlinear Reconstruction for Operator Learning of PDEs with Discontinuities
(2022) SAM Research Report. A large class of hyperbolic and advection-dominated PDEs can have solutions with discontinuities. This paper investigates, both theoretically and empirically, the operator learning of PDEs with discontinuous solutions. We rigorously prove, in terms of lower approximation bounds, that methods which entail a linear reconstruction step (e.g. DeepONet or PCA-Net) fail to efficiently approximate the solution operator of such PDEs. In contrast, ...
Error estimates for physics informed neural networks approximating the Navier-Stokes equations
(2022) SAM Research Report. We prove rigorous bounds on the errors resulting from the approximation of the incompressible Navier-Stokes equations with (extended) physics informed neural networks. We show that the underlying PDE residual can be made arbitrarily small for tanh neural networks with two hidden layers. Moreover, the total error can be estimated in terms of the training error, network size and number of quadrature points. The theory is illustrated with ...
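The central object in this analysis is the PDE residual of a two-hidden-layer tanh network, evaluated at quadrature points. A minimal sketch of that idea, under strong simplifying assumptions: the toy PDE below is the 1D heat equation rather than the incompressible Navier-Stokes equations, derivatives are taken by finite differences rather than automatic differentiation, and the network weights are arbitrary.

```python
import numpy as np

def mlp_tanh(params, t, x):
    """Two-hidden-layer tanh network u(t, x) -> scalar: the network
    class analysed in the report (weights here are arbitrary)."""
    W1, b1, W2, b2, W3, b3 = params
    z = np.tanh(np.array([t, x]) @ W1 + b1)
    z = np.tanh(z @ W2 + b2)
    return float(z @ W3 + b3)

def pde_residual(params, t, x, h=1e-4):
    """Residual of the 1D heat equation u_t = u_xx at a point (t, x),
    via central finite differences. Illustrative stand-in only: the
    report treats the Navier-Stokes residual with exact derivatives.
    """
    u = lambda t_, x_: mlp_tanh(params, t_, x_)
    u_t = (u(t + h, x) - u(t - h, x)) / (2 * h)
    u_xx = (u(t, x + h) - 2 * u(t, x) + u(t, x - h)) / h ** 2
    return u_t - u_xx
```

Training a PINN amounts to driving this residual (summed over quadrature points, plus boundary/initial terms) towards zero; the report's bounds relate that residual to the total approximation error.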
On the discrete equation model for compressible multiphase fluid flows
(2022) SAM Research Report. The modeling of multi-phase flow is very challenging, given the range of scales as well as the diversity of flow regimes that one encounters in this context. We revisit the discrete equation method (DEM) for two-phase flow in the absence of heat conduction and mass transfer. We analyze the resulting probability coefficients and prove their local convexity, rigorously establishing that our version of DEM can model different flow regimes ...
Agnostic Physics-Driven Deep Learning
(2022) SAM Research Report. This work establishes that a physical system can perform statistical learning without gradient computations, via an Agnostic Equilibrium Propagation (AEqprop) procedure that combines energy minimization, homeostatic control, and nudging towards the correct response. In AEqprop, the specifics of the system do not have to be known: the procedure is based only on external manipulations, and produces a stochastic gradient descent without ...
Variable-Input Deep Operator Networks
(2022) SAM Research Report. Existing architectures for operator learning require that the number and locations of sensors (where the input functions are evaluated) remain the same across all training and test samples, significantly restricting the range of their applicability. We address this issue by proposing a novel operator learning framework, termed Variable-Input Deep Operator Network (VIDON), which allows for random sensors whose number and locations can vary ...
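The key requirement described here — an encoding whose output does not depend on how many sensors there are, nor on their ordering — can be met with attention-style pooling over sensor tokens. The sketch below is in the spirit of VIDON but is not the VIDON architecture itself; the embedding, scoring, and value maps are simplified to single linear maps with arbitrary weights.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

def encode_sensors(xs, us, w_score, W_val):
    """Permutation-invariant encoding of a variable number of sensors
    (illustrative sketch, not the exact VIDON architecture).

    xs: sensor locations (m,); us: sensed values u(xs) (m,); m may
    differ across samples. Each sensor becomes a token (x_i, u_i);
    attention pooling yields an output whose size is independent
    of m and invariant to the ordering of the sensors.
    """
    Z = np.stack([xs, us], axis=1)   # (m, 2) sensor tokens
    alpha = softmax(Z @ w_score)     # (m,) attention weights over sensors
    return alpha @ (Z @ W_val)       # (d,) pooled, fixed-size encoding
```

Because the pooling is a weighted sum, shuffling the sensors (or changing their number between samples) leaves the encoder well-defined, which is exactly the flexibility the abstract highlights.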
Generic bounds on the approximation error for physics-informed (and) operator learning
(2022) SAM Research Report. We propose a very general framework for deriving rigorous bounds on the approximation error for physics-informed neural networks (PINNs) and operator learning architectures such as DeepONets and FNOs, as well as for physics-informed operator learning. These bounds guarantee that PINNs and (physics-informed) DeepONets or FNOs will efficiently approximate the underlying solution or solution operator of generic partial differential equations ...
Gradient Gating for Deep Multi-Rate Learning on Graphs
(2022) SAM Research Report. We present Gradient Gating (G2), a novel framework for improving the performance of Graph Neural Networks (GNNs). Our framework is based on gating the output of GNN layers with a mechanism for multi-rate flow of message passing information across nodes of the underlying graph. Local gradients are harnessed to further modulate message passing updates. Our framework flexibly allows one to use any basic GNN layer as a wrapper around which ...
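The gating-by-local-gradients idea can be sketched as a residual update whose per-node, per-channel rate is driven by feature differences across edges. This is a simplified illustration, not the exact G2 formulation: the wrapped layer is assumed to be a plain graph convolution, and the gate is a sigmoid of the summed p-th-power neighbour differences.

```python
import numpy as np

def g2_step(X, A, W, p=2.0):
    """One simplified Gradient-Gating (G2) style update
    (illustrative sketch, not the exact G2 formulation).

    X: node features (n, d); A: adjacency matrix (n, n); W: (d, d).
    The wrapped layer is a plain graph convolution; the gate tau
    grows with the magnitude of the local graph gradient (feature
    differences to neighbours), so nodes in smooth regions update
    slowly while others update fast -- a multi-rate flow.
    """
    n, d = X.shape
    H = np.tanh(A @ X @ W)             # candidate update from the wrapped GNN layer
    grad = np.zeros((n, d))            # local graph gradient: sum_j A_ij |H_i - H_j|^p
    for i in range(n):
        for j in range(n):
            if A[i, j]:
                grad[i] += A[i, j] * np.abs(H[i] - H[j]) ** p
    tau = 1.0 / (1.0 + np.exp(-grad))  # per-node, per-channel gate in (0, 1)
    return (1.0 - tau) * X + tau * H   # gated residual update
```

Any basic GNN layer could replace the `np.tanh(A @ X @ W)` line, mirroring the wrapper design described in the abstract.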
Error analysis for deep neural network approximations of parametric hyperbolic conservation laws
(2022) SAM Research Report. We derive rigorous bounds on the error resulting from the approximation of the solution of parametric hyperbolic scalar conservation laws with ReLU neural networks. We show that the approximation error can be made as small as desired with ReLU neural networks that overcome the curse of dimensionality. In addition, we provide an explicit upper bound on the generalization error in terms of the training error, number of training samples and ...