Search Results

A Monte-Carlo ab-initio algorithm for the multiscale simulation of compressible multiphase flows
(2023) SAM Research Report. We propose a novel Monte-Carlo based ab-initio algorithm for directly computing the statistics of quantities of interest in an immiscible two-phase compressible flow. Our algorithm samples the underlying probability space and evolves these samples with a sharp-interface front-tracking scheme. Consequently, statistical information is generated without resorting to any closure assumptions, and information about the underlying microstructure ...
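The entry above describes a sampling-based pipeline: draw realizations of the uncertain initial data, evolve each one with a deterministic solver, and read statistics off the ensemble. Below is a minimal sketch of that loop, with a toy transported interface standing in for the paper's front-tracking scheme; the function names and setup are illustrative, not the paper's implementation.

```python
import numpy as np

def evolve_interface(x0, speed, t):
    # Toy stand-in for the front-tracking solver: transport a sharp
    # interface located at x0 with constant speed.
    return x0 + speed * t

def mc_interface_statistics(n_samples=10_000, t=0.5, seed=0):
    """Monte-Carlo sketch: sample uncertain initial interface positions,
    evolve each sample deterministically, and collect statistics of the
    final position. The statistics come directly from the samples, with
    no closure model for the ensemble."""
    rng = np.random.default_rng(seed)
    x0 = rng.uniform(0.4, 0.6, size=n_samples)  # uncertain initial interface
    xT = evolve_interface(x0, speed=1.0, t=t)   # per-sample deterministic evolution
    return xT.mean(), xT.std()

print(mc_interface_statistics())  # roughly (1.0, 0.058) for this toy setup
```

Because each sample is evolved exactly (up to discretization), no closure assumption enters the statistics; that is the sense in which the algorithm is "ab initio".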
A Survey on Oversmoothing in Graph Neural Networks
(2023) SAM Research Report. Node features of graph neural networks (GNNs) tend to become more similar as the network depth increases. This effect is known as over-smoothing, which we axiomatically define as the exponential convergence of suitable similarity measures on the node features. Our definition unifies previous approaches and gives rise to new quantitative measures of over-smoothing. Moreover, we empirically demonstrate this behavior for several ...
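Over-smoothing is typically quantified by a similarity measure on node features that decays with depth; the Dirichlet energy below is one standard choice (the paper's axiomatic definition covers a family of such measures). The repeated-averaging layer is a toy linear GNN for demonstration, not an architecture from the paper.

```python
import numpy as np

def dirichlet_energy(X, A):
    """Dirichlet energy of node features X on a graph with adjacency A.
    Exponential decay of such a similarity measure with depth is the
    paper's notion of over-smoothing."""
    L = np.diag(A.sum(axis=1)) - A  # graph Laplacian
    return np.trace(X.T @ L @ X)

# Toy demonstration: repeated neighbourhood averaging (a linear GNN layer)
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
P = A / A.sum(axis=1, keepdims=True)  # row-normalised propagation matrix
X = np.random.default_rng(0).normal(size=(3, 4))
for depth in range(5):
    print(depth, dirichlet_energy(X, A))  # decays geometrically with depth
    X = P @ X                             # one message-passing step
```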
Nonlinear Reconstruction for Operator Learning of PDEs with Discontinuities
(2022) SAM Research Report. A large class of hyperbolic and advection-dominated PDEs can have solutions with discontinuities. This paper investigates, both theoretically and empirically, the operator learning of PDEs with discontinuous solutions. We rigorously prove, in terms of lower approximation bounds, that methods which entail a linear reconstruction step (e.g. DeepONet or PCA-Net) fail to efficiently approximate the solution operator of such PDEs. In contrast, ...
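The dividing line in this paper is the reconstruction step: DeepONet- and PCA-Net-style methods output a fixed basis combined linearly with input-dependent coefficients. A small numpy experiment with an illustrative sine basis (my choice, not the paper's) hints at why such linear reconstruction struggles near a jump:

```python
import numpy as np

# Linear reconstruction (schematic): u(x) = sum_k beta_k * tau_k(x),
# where the basis {tau_k} is fixed and only the coefficients beta_k
# depend on the PDE input. The paper's lower bounds concern this structure.
tau = [lambda x, k=k: np.sin((k + 1) * np.pi * x) for k in range(8)]
x = np.linspace(0, 1, 401)
step = (x > 0.37).astype(float)                  # discontinuous target
B = np.stack([t(x) for t in tau], axis=1)        # basis evaluated on the grid
beta, *_ = np.linalg.lstsq(B, step, rcond=None)  # best linear-in-basis fit
print(np.max(np.abs(B @ beta - step)))           # large sup-norm error at the jump
```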
Estimates on the generalization error of Physics Informed Neural Networks (PINNs) for approximating PDEs II: A class of inverse problems
(2020) SAM Research Report. Physics-informed neural networks (PINNs) have recently been applied very successfully to efficiently approximate inverse problems for PDEs. We focus on a particular class of inverse problems, the so-called data assimilation or unique continuation problems, and prove rigorous estimates on the generalization error of PINNs approximating them. An abstract framework is presented, and conditional stability estimates for the underlying inverse ...
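As a rough illustration of the setup, a PINN for a data-assimilation problem minimizes a PDE residual on collocation points plus a data misfit on the observation subdomain. Below is a schematic PyTorch loss for the heat equation with placeholder observation data; the paper's specific problems, norms, and weightings differ.

```python
import torch

# Small tanh network u_theta(x, t); architecture is illustrative only.
net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))

def heat_residual(xt):
    """Residual of u_t = u_xx at points xt = (x, t), via autodiff."""
    xt = xt.requires_grad_(True)
    u = net(xt)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, :1], grads[:, 1:]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, :1]
    return u_t - u_xx

xt_colloc = torch.rand(256, 2)      # interior collocation points
xt_obs = torch.rand(64, 2) * 0.25   # observations on a small subdomain
u_obs = torch.zeros(64, 1)          # placeholder measurement values
loss = (heat_residual(xt_colloc) ** 2).mean() \
     + ((net(xt_obs) - u_obs) ** 2).mean()  # PDE residual + data misfit
loss.backward()
```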
On the approximation of rough functions with deep neural networks
(2020) SAM Research Report. Deep neural networks and the ENO procedure are both efficient frameworks for approximating rough functions. We prove that, at any order, the ENO interpolation procedure can be cast as a deep ReLU neural network. This surprising fact enables the transfer of several desirable properties of the ENO procedure to deep neural networks, including its high-order accuracy at approximating Lipschitz functions. Numerical tests for the resulting neural ...
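The key structural fact is that ENO's stencil selection is a data-dependent comparison, i.e. exactly the kind of piecewise-linear logic a ReLU network can express. A second-order ENO sketch in numpy (the paper's result covers arbitrary order):

```python
import numpy as np

def eno2_midpoints(f):
    """Second-order ENO interpolation at cell midpoints (sketch, unit
    spacing). At each interior point, use the neighbouring divided
    difference of smaller magnitude: a data-dependent if/else that a
    ReLU network can reproduce exactly."""
    left = f[1:-1] - f[:-2]       # backward differences
    right = f[2:] - f[1:-1]       # forward differences
    slope = np.where(np.abs(left) <= np.abs(right), left, right)
    return f[1:-1] + 0.5 * slope  # value at x_i + 1/2

x = np.arange(10.0)
f = np.where(x < 5, 0.0, 1.0)     # step function: "rough" data
print(eno2_midpoints(f))          # no overshoot at the jump
```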
On the conservation of energy in two-dimensional incompressible flows
(2020) SAM Research Report. We prove the conservation of energy for weak and statistical solutions of the two-dimensional Euler equations, generated as strong (in an appropriate topology) limits of the underlying Navier-Stokes equations and a Monte Carlo-Spectral Viscosity numerical approximation, respectively. We characterize this conservation of energy in terms of a uniform decay of the so-called structure function, allowing us to extend existing results on energy ...
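The structure function mentioned here measures mean-square increments of the velocity field over small separations; uniform decay as the separation shrinks is the criterion for energy conservation in the limit. A 1-D periodic sketch of the computation (the paper works with 2-D fields and a locally averaged variant):

```python
import numpy as np

def structure_function(u, r_max):
    """Second-order structure function S(r) = mean over x of
    |u(x + r) - u(x)|^2 for a periodic 1-D sample, r = 1..r_max
    grid shifts. S(r) -> 0 as r -> 0 is the decay in question."""
    return np.array([np.mean((np.roll(u, -r) - u) ** 2)
                     for r in range(1, r_max + 1)])

rng = np.random.default_rng(0)
u = np.cumsum(rng.normal(size=1024))  # rough toy "velocity" field
u -= u.mean()
print(structure_function(u, 5))       # grows roughly linearly in r
```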
Error estimates for DeepOnets: A deep learning framework in infinite dimensions
(2021) SAM Research Report. DeepOnets have recently been proposed as a framework for learning nonlinear operators mapping between infinite-dimensional Banach spaces. We analyze DeepOnets and prove estimates on the resulting approximation and generalization errors. In particular, we extend the universal approximation property of DeepOnets to include measurable mappings in non-compact spaces. By a decomposition of the error into encoding, approximation and reconstruction ...
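For orientation, a DeepONet pairs a branch net (acting on sensor values of the input function) with a trunk net (acting on the query point); the error decomposition mentioned above maps onto the sensor encoding, the networks themselves, and the trunk-basis reconstruction. A schematic forward pass with random placeholder weights, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
W_b = rng.normal(size=(16, 32))  # "branch" weights: 32 sensor values -> 16 coefficients
W_t = rng.normal(size=(16, 1))   # "trunk" weights: query point -> 16 basis values

def deeponet(u_sensors, y):
    """Schematic one-layer DeepONet: G(u)(y) ~ sum_k beta_k(u) * tau_k(y)."""
    beta = np.tanh(W_b @ u_sensors)         # branch: encode the input function
    tau = np.tanh(W_t @ np.atleast_1d(y))   # trunk: evaluate basis at y
    return beta @ tau

u = np.sin(np.linspace(0, np.pi, 32))       # input function sampled at 32 sensors
print(deeponet(u, 0.3))
```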
UnICORNN: A recurrent model for learning very long time dependencies
(2021) SAM Research Report. The design of recurrent neural networks (RNNs) to accurately process sequential inputs with long-time dependencies is very challenging on account of the exploding and vanishing gradient problem. To overcome this, we propose a novel RNN architecture based on a structure-preserving discretization of a Hamiltonian system of second-order ordinary differential equations that models networks of oscillators. The resulting RNN is fast, ...
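A sketch of the oscillatory mechanism: write a second-order oscillator network as a first-order system in positions and velocities and update the two in a staggered, symplectic-Euler style fashion. This is a schematic reading of the construction, not the paper's exact update rule or parameterization.

```python
import numpy as np

def oscillator_rnn_step(y, z, x, w, V, b, dt=0.1, alpha=1.0):
    """One staggered step for y'' = tanh(w*y + V x + b) - alpha*y,
    with hidden state y (positions) and z (velocities). w acts
    elementwise, mimicking an independent oscillator per unit."""
    z = z + dt * (np.tanh(w * y + V @ x + b) - alpha * y)  # velocity first
    y = y + dt * z                                         # then position
    return y, z

rng = np.random.default_rng(0)
d, m = 8, 3                              # hidden size, input size
y, z = np.zeros(d), np.zeros(d)
w, V, b = rng.normal(size=d), rng.normal(size=(d, m)), np.zeros(d)
for t in range(20):                      # roll the recurrence over a sequence
    y, z = oscillator_rnn_step(y, z, rng.normal(size=m), w, V, b)
print(y)
```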
Higher-order Quasi-Monte Carlo Training of Deep Neural Networks
(2020) SAM Research Report. We present a novel algorithmic approach and an error analysis leveraging Quasi-Monte Carlo points for training deep neural network (DNN) surrogates of Data-to-Observable (DtO) maps in engineering design. Our analysis reveals higher-order consistent, deterministic choices of training points in the input data space for deep and shallow neural networks with holomorphic activation functions such as tanh. These novel training points are proved ...
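The algorithmic change is small: generate the training inputs from a deterministic low-discrepancy sequence rather than at random. In the sketch below, SciPy's Sobol generator stands in for the higher-order nets analyzed in the paper, and the data-to-observable map is a toy smooth function of my choosing:

```python
import numpy as np
from scipy.stats import qmc

d = 2
X_qmc = qmc.Sobol(d, scramble=False).random_base2(m=7)  # 128 low-discrepancy points
X_mc = np.random.default_rng(0).random((128, d))        # 128 random points

def dto_map(x):
    # Toy smooth (holomorphic-like) data-to-observable map.
    return np.exp(-np.sum(x ** 2, axis=-1))

# Both point sets would feed the same DNN training loop; the paper's
# analysis gives higher-order decay of the generalization gap for the
# deterministic QMC choice when the map and activation are smooth.
print(dto_map(X_qmc).mean(), dto_map(X_mc).mean())
```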
Iterative Surrogate Model Optimization (ISMO): An active learning algorithm for PDE constrained optimization with deep neural networks
(2020) SAM Research Report. We present a novel active learning algorithm, termed iterative surrogate model optimization (ISMO), for robust and efficient numerical approximation of PDE-constrained optimization problems. This algorithm is based on deep neural networks, and its key feature is the iterative selection of training data through a feedback loop between the deep neural networks and any underlying standard optimization algorithm. Under suitable hypotheses, we ...
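The feedback loop sketched below is the essence of ISMO as described: train a surrogate on the current data, let a standard optimizer probe the surrogate, evaluate the true objective at the optimizer's candidates, and retrain on the enlarged set. All three callables are hypothetical placeholders for the paper's DNN surrogate and the user's optimizer.

```python
import numpy as np

def ismo_sketch(objective, train_surrogate, optimize_surrogate,
                n_init=32, n_iter=5, n_new=8, seed=0):
    """Schematic ISMO loop over a 2-D input space. Each iteration
    enriches the training set with points proposed by an optimizer
    run on the current surrogate."""
    rng = np.random.default_rng(seed)
    X = rng.random((n_init, 2))                   # initial design
    y = objective(X)
    for _ in range(n_iter):
        model = train_surrogate(X, y)             # fit surrogate to (X, y)
        X_new = optimize_surrogate(model, n_new)  # optimizer's candidates
        X = np.vstack([X, X_new])                 # feedback: enlarge training set
        y = np.concatenate([y, objective(X_new)])
    return X, y

# Trivial stand-ins, just to exercise the loop:
X, y = ismo_sketch(
    objective=lambda X: np.sum((X - 0.3) ** 2, axis=1),
    train_surrogate=lambda X, y: (X, y),
    optimize_surrogate=lambda model, n: np.random.default_rng(1).random((n, 2)),
)
print(X.shape, y.shape)  # grows by n_new points per iteration
```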