Search Results
A Monte-Carlo ab-initio algorithm for the multiscale simulation of compressible multiphase flows
(2023) SAM Research Report. We propose a novel Monte-Carlo-based ab-initio algorithm for directly computing the statistics of quantities of interest in an immiscible two-phase compressible flow. Our algorithm samples the underlying probability space and evolves these samples with a sharp-interface front-tracking scheme. Consequently, statistical information is generated without resorting to any closure assumptions, and information about the underlying microstructure ...
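The sample-and-evolve ansatz described in this abstract can be sketched in a toy setting. The solver below is a hypothetical stand-in for the front-tracking scheme, and the Gaussian input distribution is an illustrative assumption:

```python
import numpy as np

# Toy Monte-Carlo sketch: sample uncertain initial data, push each sample
# through a deterministic solver, then read statistics of a quantity of
# interest (QoI) directly off the ensemble -- no closure assumptions needed.
rng = np.random.default_rng(0)

def evolve(amplitude):
    # hypothetical stand-in for the front-tracking solver:
    # here the QoI simply decays by a fixed factor
    return amplitude * np.exp(-0.5)

samples = rng.normal(loc=1.0, scale=0.1, size=400)  # draws from the probability space
qoi = np.array([evolve(a) for a in samples])
mean_qoi, std_qoi = qoi.mean(), qoi.std()
```

The ensemble mean and spread of `qoi` are the desired statistics; refining the estimate only requires drawing more samples.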
A Survey on Oversmoothing in Graph Neural Networks
(2023) SAM Research Report. Node features of graph neural networks (GNNs) tend to become more similar as network depth increases. This effect is known as over-smoothing, which we axiomatically define as the exponential convergence of suitable similarity measures on the node features. Our definition unifies previous approaches and gives rise to new quantitative measures of over-smoothing. Moreover, we empirically demonstrate this behavior for several ...
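The defining behavior, exponential decay of a similarity measure with depth, can be reproduced with a minimal sketch. The graph, the mean-aggregation layer, and the Dirichlet-energy measure below are illustrative choices, not the paper's exact setup:

```python
import numpy as np

# Path graph on 4 nodes with self-loops; P is the row-normalized adjacency,
# so P @ X performs one round of mean aggregation (a linear GNN layer).
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)

def dirichlet_energy(X):
    # sum of squared feature differences over edges: one similarity measure
    e = 0.0
    for i in range(len(X)):
        for j in range(len(X)):
            if i != j and A[i, j]:
                e += np.sum((X[i] - X[j]) ** 2)
    return e / 2

X = np.random.default_rng(0).normal(size=(4, 3))  # random node features
energies = []
for _ in range(20):
    X = P @ X                     # one message-passing layer
    energies.append(dirichlet_energy(X))
# the energy decays geometrically with depth: node features converge
```

After 20 layers the energy has collapsed by several orders of magnitude, matching the exponential-convergence definition of over-smoothing.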
Multi-Scale Message Passing Neural PDE Solvers
(2023) SAM Research Report. We propose a novel multi-scale message passing neural network algorithm for learning the solutions of time-dependent PDEs. Our algorithm possesses both temporal and spatial multi-scale resolution features by incorporating multi-scale sequence models and graph gating modules in the encoder and processor, respectively. Benchmark numerical experiments are presented to demonstrate that the proposed algorithm outperforms baselines, particularly ...
Convolutional Neural Operators
(2023) SAM Research Report. Although very successfully used in machine learning, convolution-based neural network architectures -- believed to be inconsistent in function space -- have been largely ignored in the context of learning solution operators of PDEs. Here, we adapt convolutional neural networks to demonstrate that they are indeed able to process functions as inputs and outputs. The resulting architecture, termed convolutional neural operators (CNOs), ...
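The core point, that a convolution acting on grid samples realizes an operator on functions, can be illustrated minimally. The fixed averaging kernel below is an illustrative stand-in for a learned CNO convolution:

```python
import numpy as np

# Sample an input function on a uniform periodic grid.
x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
u = np.sin(x)                       # the input "function", given by its samples

kernel = np.ones(5) / 5.0           # fixed averaging kernel (learned in a CNO)
u_pad = np.concatenate([u[-2:], u, u[:2]])    # periodic padding
v = np.convolve(u_pad, kernel, mode="valid")  # convolution applied to the function
# v samples a smoothed output function on the same grid: an operator applied to u
```

The output `v` lives on the same grid as `u` and approximates a smoothed copy of the input function, i.e. the discrete convolution consistently represents a (here linear) operator between function spaces.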
Neural Inverse Operators for Solving PDE Inverse Problems
(2023) SAM Research Report. A large class of inverse problems for PDEs is only well-defined as mappings from operators to functions. Existing operator learning frameworks map functions to functions and need to be modified to learn inverse maps from data. We propose a novel architecture, termed Neural Inverse Operators (NIOs), to solve these PDE inverse problems. Motivated by the underlying mathematical structure, NIO is based on a suitable composition of DeepONets ...
Nonlinear Reconstruction for Operator Learning of PDEs with Discontinuities
(2022) SAM Research Report. A large class of hyperbolic and advection-dominated PDEs can have solutions with discontinuities. This paper investigates, both theoretically and empirically, the operator learning of PDEs with discontinuous solutions. We rigorously prove, in terms of lower approximation bounds, that methods which entail a linear reconstruction step (e.g. DeepONet or PCA-Net) fail to efficiently approximate the solution operator of such PDEs. In contrast, ...
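The linear-reconstruction obstruction has a classical analogue: any fixed linear basis, such as truncated Fourier modes, approximates a discontinuous profile poorly near the jump. A minimal sketch of this effect (not the paper's lower-bound argument):

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 2048, endpoint=False)
step = np.sign(x)  # a discontinuous "solution" profile

def fourier_reconstruction(N):
    # exact Fourier series of sign(x): (4/pi) * sum_k sin((2k+1)x) / (2k+1)
    s = np.zeros_like(x)
    for k in range(N):
        s += np.sin((2 * k + 1) * x) / (2 * k + 1)
    return 4.0 / np.pi * s

err_10 = np.max(np.abs(fourier_reconstruction(10) - step))
err_100 = np.max(np.abs(fourier_reconstruction(100) - step))
# the maximum error near the jumps stays O(1) however many modes are used
```

Adding ten times as many linear modes barely improves the worst-case error (the Gibbs phenomenon), which is the intuition behind requiring a nonlinear reconstruction step.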
Gradient Gating for Deep Multi-Rate Learning on Graphs
(2022) SAM Research Report. We present Gradient Gating (G2), a novel framework for improving the performance of Graph Neural Networks (GNNs). Our framework is based on gating the output of GNN layers with a mechanism for multi-rate flow of message passing information across nodes of the underlying graph. Local gradients are harnessed to further modulate message passing updates. Our framework flexibly allows one to use any basic GNN layer as a wrapper around which ...
Weak physics informed neural networks for approximating entropy solutions of hyperbolic conservation laws
(2022) SAM Research Report. Physics-informed neural networks (PINNs) require regularity of solutions of the underlying PDE to guarantee accurate approximation. Consequently, they may fail at approximating discontinuous solutions of PDEs such as nonlinear hyperbolic equations. To ameliorate this, we propose a novel variant of PINNs, termed weak PINNs (wPINNs), for the accurate approximation of entropy solutions of scalar conservation laws. wPINNs are based on approximating ...
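Why strong-form PINN residuals struggle at shocks can be seen with a toy finite-difference check (an illustrative motivation, not the wPINN construction itself): the strong residual of a stationary Burgers shock diverges under grid refinement, even though the shock is a perfectly valid entropy solution in the weak sense.

```python
import numpy as np

def max_strong_residual(n):
    # strong-form Burgers residual u_t + u * u_x for a stationary shock
    x = np.linspace(-1.0, 1.0, n)
    u = np.where(x < 0, 1.0, -1.0)  # stationary shock (Rankine-Hugoniot speed 0)
    ux = np.gradient(u, x)          # finite-difference approximation of u_x
    return np.max(np.abs(u * ux))   # u_t = 0 for this steady profile

r_coarse = max_strong_residual(100)
r_fine = max_strong_residual(1000)
# the residual grows roughly linearly with resolution instead of vanishing
```

Since the pointwise residual cannot be driven to zero at the discontinuity, a weak (integrated-against-test-functions) residual of the kind wPINNs use is the natural training target.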
Error analysis for deep neural network approximations of parametric hyperbolic conservation laws
(2022) SAM Research Report. We derive rigorous bounds on the error resulting from the approximation of the solution of parametric hyperbolic scalar conservation laws with ReLU neural networks. We show that the approximation error can be made as small as desired with ReLU neural networks that overcome the curse of dimensionality. In addition, we provide an explicit upper bound on the generalization error in terms of the training error, number of training samples and ...
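One ingredient behind such ReLU approximation results is that ReLU networks represent continuous piecewise-linear functions exactly. A two-neuron example (illustrative, not the paper's construction):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# |x| = ReLU(x) + ReLU(-x): a width-2 ReLU network reproduces the kink
# exactly, the kind of building block used when bounding approximation errors
x = np.linspace(-1.0, 1.0, 201)
net = relu(x) + relu(-x)
```

Kinks like this one are assembled into piecewise-linear approximants of the (possibly discontinuity-forming) solution profiles, which is where the explicit error bounds come from.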
Agnostic Physics-Driven Deep Learning
(2022) SAM Research Report. This work establishes that a physical system can perform statistical learning without gradient computations, via an Agnostic Equilibrium Propagation (AEqprop) procedure that combines energy minimization, homeostatic control, and nudging towards the correct response. In AEqprop, the specifics of the system do not have to be known: the procedure is based only on external manipulations, and produces a stochastic gradient descent without ...