Search Results
Construction of approximate entropy measure valued solutions for hyperbolic systems of conservation laws
(2014) Research Report. Numerical evidence is presented to demonstrate that state-of-the-art numerical schemes need not converge to entropy solutions of systems of hyperbolic conservation laws in several space dimensions. Combined with recent results on the lack of stability of these solutions, we advocate the more general notion of entropy measure-valued solutions as the appropriate paradigm for solutions of such multi-dimensional systems. We propose a detailed ...
UnICORNN: A recurrent model for learning very long time dependencies
(2021) SAM Research Report. The design of recurrent neural networks (RNNs) to accurately process sequential inputs with long-time dependencies is very challenging on account of the exploding and vanishing gradient problem. To overcome this, we propose a novel RNN architecture which is based on a structure-preserving discretization of a Hamiltonian system of second-order ordinary differential equations that models networks of oscillators. The resulting RNN is fast, ...
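The oscillator-network idea in the abstract can be illustrated with a minimal sketch: a symplectic-Euler step of a second-order ODE used as an RNN cell. This is an illustrative stand-in, not the report's actual UnICORNN update rule; all parameter names and the scalar `alpha` are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h, dt = 2, 8, 0.1

# Hypothetical parameters; the actual UnICORNN parametrization differs in detail.
W = rng.normal(scale=0.5, size=(d_h,))        # diagonal hidden-to-hidden weights
V = rng.normal(scale=0.5, size=(d_h, d_in))   # input weights
b = np.zeros(d_h)
alpha = 1.0                                   # frequency/control term (assumed)

def cell(y, z, x):
    """One symplectic-Euler step of the oscillator ODE y'' = -tanh(W*y + V x + b) - alpha*y."""
    z = z - dt * (np.tanh(W * y + V @ x + b) + alpha * y)  # velocity update
    y = y + dt * z                                          # position update
    return y, z

y = z = np.zeros(d_h)
for x in rng.normal(size=(20, d_in)):   # a length-20 input sequence
    y, z = cell(y, z, x)
```

Because the step is structure-preserving, the hidden state neither blows up nor collapses over long sequences, which is the mechanism the abstract credits for mitigating exploding/vanishing gradients.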
Higher-order Quasi-Monte Carlo Training of Deep Neural Networks
(2020) SAM Research Report. We present a novel algorithmic approach and an error analysis leveraging Quasi-Monte Carlo points for training deep neural network (DNN) surrogates of Data-to-Observable (DtO) maps in engineering design. Our analysis reveals higher-order consistent, deterministic choices of training points in the input data space for deep and shallow neural networks with holomorphic activation functions such as tanh. These novel training points are proved ...
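To make "deterministic choices of training points" concrete, here is a sketch that generates low-discrepancy points via a Halton sequence. The report concerns higher-order QMC rules, which are more sophisticated; Halton is used here only as a simple, dependency-free example of deterministic QMC sampling.

```python
def radical_inverse(n, base):
    """Van der Corput radical inverse of the integer n in the given base."""
    inv, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        inv += digit / denom
    return inv

def halton(n_points, primes=(2, 3)):
    """First n_points of the Halton sequence in [0,1]^len(primes)."""
    return [[radical_inverse(i, p) for p in primes] for i in range(1, n_points + 1)]

train_x = halton(64)   # 64 deterministic training points in a 2-D input space
```

Unlike i.i.d. random draws, these points are reproducible and fill the input space evenly, which is what enables the higher-order convergence rates claimed in the abstract (for the report's specific rules).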
Iterative Surrogate Model Optimization (ISMO): An active learning algorithm for PDE constrained optimization with deep neural networks
(2020) SAM Research Report. We present a novel active learning algorithm, termed iterative surrogate model optimization (ISMO), for robust and efficient numerical approximation of PDE constrained optimization problems. This algorithm is based on deep neural networks and its key feature is the iterative selection of training data through a feedback loop between deep neural networks and any underlying standard optimization algorithm. Under suitable hypotheses, we ...
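The feedback loop described in the abstract can be sketched as follows. A cheap polynomial fit stands in for the DNN surrogate and a grid search stands in for the optimizer; the mock objective is hypothetical. Only the loop structure — fit surrogate, optimize it, evaluate the true objective at the optimizer's candidate, enlarge the training set — reflects the abstract.

```python
import numpy as np

def objective(x):
    """Expensive PDE-constrained objective, mocked here by a cheap function."""
    return (x - 0.3) ** 2 + 0.1 * np.sin(8 * x)

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=8)          # initial training set
Y = objective(X)
grid = np.linspace(0, 1, 401)          # stand-in for the inner optimizer

for it in range(5):
    coeffs = np.polyfit(X, Y, deg=4)   # surrogate fit (a DNN in the actual ISMO)
    surrogate = np.polyval(coeffs, grid)
    x_new = grid[np.argmin(surrogate)] # optimizer queries the surrogate ...
    X = np.append(X, x_new)            # ... and its minimizer is added to the data
    Y = np.append(Y, objective(x_new))

best = X[np.argmin(Y)]
```

Each iteration concentrates new training data near the surrogate's current optimum, so the surrogate becomes accurate precisely where the optimizer needs it.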
Error estimates for physics informed neural networks approximating the Navier-Stokes equations
(2022) SAM Research Report. We prove rigorous bounds on the errors resulting from the approximation of the incompressible Navier-Stokes equations with (extended) physics informed neural networks. We show that the underlying PDE residual can be made arbitrarily small for tanh neural networks with two hidden layers. Moreover, the total error can be estimated in terms of the training error, network size and number of quadrature points. The theory is illustrated with ...
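The notion of a "PDE residual" evaluated at quadrature points can be sketched for a much simpler problem than Navier-Stokes: a 1-D Poisson equation -u'' = f with zero boundary values. The network weights below are untrained placeholders, and derivatives are taken by finite differences purely to keep the sketch dependency-free; actual physics informed neural networks use automatic differentiation.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 1)), np.zeros((16, 1))
W2 = rng.normal(size=(1, 16))

def u(x):
    """Two-hidden-unit-layer tanh network u_theta(x); weights are untrained placeholders."""
    return (W2 @ np.tanh(W1 @ x[None, :] + b1)).ravel()

def pinn_loss(xs, h=1e-4):
    """Residual loss for -u'' = f with f(x) = pi^2 sin(pi x), plus boundary penalties."""
    u_xx = (u(xs + h) - 2 * u(xs) + u(xs - h)) / h**2      # second derivative by central differences
    f = np.pi**2 * np.sin(np.pi * xs)
    interior = np.mean((-u_xx - f) ** 2)                    # PDE residual at quadrature points
    boundary = u(np.array([0.0]))[0]**2 + u(np.array([1.0]))[0]**2
    return interior + boundary

loss = pinn_loss(np.linspace(0.05, 0.95, 32))
```

Training would minimize this loss over the network weights; the report's error estimates bound the distance to the true solution in terms of such a (trained) residual, the network size, and the number of quadrature points.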
On the discrete equation model for compressible multiphase fluid flows
(2022) SAM Research Report. The modeling of multi-phase flow is very challenging, given the range of scales as well as the diversity of flow regimes that one encounters in this context. We revisit the discrete equation method (DEM) for two-phase flow in the absence of heat conduction and mass transfer. We analyze the resulting probability coefficients and prove their local convexity, rigorously establishing that our version of DEM can model different flow regimes ...
Agnostic Physics-Driven Deep Learning
(2022) SAM Research Report. This work establishes that a physical system can perform statistical learning without gradient computations, via an Agnostic Equilibrium Propagation (AEqprop) procedure that combines energy minimization, homeostatic control, and nudging towards the correct response. In AEqprop, the specifics of the system do not have to be known: the procedure is based only on external manipulations, and produces a stochastic gradient descent without ...
Variable-Input Deep Operator Networks
(2022) SAM Research Report. Existing architectures for operator learning require that the number and locations of sensors (where the input functions are evaluated) remain the same across all training and test samples, significantly restricting the range of their applicability. We address this issue by proposing a novel operator learning framework, termed Variable-Input Deep Operator Network (VIDON), which allows for random sensors whose number and locations can vary ...
Entropy conservative and entropy stable schemes for non-conservative hyperbolic systems
(2011) Research Report. The vanishing viscosity limit of non-conservative hyperbolic systems depends heavily on the specific form of the viscosity. Numerical approximations, such as the path consistent schemes of [16], may not converge to the physically relevant solutions of the system. We construct entropy stable path consistent (ESPC) schemes to approximate non-conservative hyperbolic systems by combining entropy conservative discretizations with numerical ...
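The "entropy conservative plus numerical diffusion" construction mentioned in the abstract can be illustrated on the simplest possible example: scalar Burgers' equation with the square entropy. The report's ESPC schemes target non-conservative systems, which are substantially more involved; this sketch only shows the generic two-step construction.

```python
def ec_flux(a, b):
    """Tadmor's entropy conservative two-point flux for Burgers' equation (entropy u^2/2)."""
    return (a * a + a * b + b * b) / 6.0

def es_flux(a, b):
    """Entropy stable flux: the EC flux plus Rusanov-type numerical diffusion.

    Adding diffusion proportional to the jump (b - a) turns exact entropy
    conservation into entropy dissipation across discontinuities."""
    lam = max(abs(a), abs(b))          # local maximal wave speed
    return ec_flux(a, b) - 0.5 * lam * (b - a)
```

Note that both fluxes are consistent: for a = b = u they reduce to the exact Burgers flux u^2/2, and the diffusion term vanishes on smooth data.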