Tim De Ryck


Publications 1 - 10 of 22
  • Prasthofer, Michael; De Ryck, Tim; Mishra, Siddhartha (2022)
    SAM Research Report
    Existing architectures for operator learning require that the number and locations of sensors (where the input functions are evaluated) remain the same across all training and test samples, significantly restricting the range of their applicability. We address this issue by proposing a novel operator learning framework, termed Variable-Input Deep Operator Network (VIDON), which allows for random sensors whose number and locations can vary across samples. VIDON is invariant to permutations of sensor locations and is proved to be universal in approximating a class of continuous operators. We also prove that VIDON can efficiently approximate operators arising in PDEs. Numerical experiments with a diverse set of PDEs are presented to illustrate the robust performance of VIDON in learning operators.
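    A minimal sketch of the permutation-invariance idea (a toy illustration under assumed notation, not the paper's exact architecture): each sensor pair (x_i, u(x_i)) is embedded by a shared network and the embeddings are pooled by attention-weighted averaging, so the encoding is unchanged under sensor reordering and tolerates a varying sensor count.

        import torch, torch.nn as nn

        class VariableInputEncoder(nn.Module):
            """Toy permutation-invariant encoder: shared MLP per sensor,
            attention-weighted mean over sensors (a hypothetical
            simplification, not VIDON's actual branch network)."""
            def __init__(self, dim=64):
                super().__init__()
                self.embed = nn.Sequential(nn.Linear(2, dim), nn.Tanh(),
                                           nn.Linear(dim, dim))
                self.score = nn.Linear(dim, 1)

            def forward(self, xs, us):
                # xs, us: (n_sensors,) tensors; n_sensors may vary per sample
                h = self.embed(torch.stack([xs, us], dim=-1))  # (n, dim)
                w = torch.softmax(self.score(h), dim=0)        # attention weights
                return (w * h).sum(dim=0)                      # order-independent code

        enc = VariableInputEncoder()
        x = torch.rand(17); u = torch.sin(x)
        p = torch.randperm(17)
        print(torch.allclose(enc(x, u), enc(x[p], u[p]), atol=1e-5))  # True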
  • De Ryck, Tim (2020)
    Deep neural networks and the ENO procedure are both efficient frameworks for approximating rough functions. We prove that at any order, the stencil shifts of the ENO and ENO-SR interpolation procedures can be exactly obtained using a deep ReLU neural network. In addition, we construct and provide error bounds for ReLU neural networks that directly approximate the output of the ENO and ENO-SR interpolation procedures. This surprising fact enables the transfer of several desirable properties of the ENO procedure to deep neural networks, including its high-order accuracy at approximating Lipschitz functions. Numerical tests for the resulting neural networks show excellent performance for interpolating rough functions, data compression and approximating solutions of nonlinear conservation laws.
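    For intuition, a toy second-order ENO step in NumPy (illustrative only, not the paper's construction): the stencil shift is decided by comparing undivided differences, a piecewise decision of exactly the kind the paper shows can be reproduced with ReLU networks.

        import numpy as np

        def eno2_midpoint(f, j):
            """Toy 2nd-order ENO interpolant at the midpoint x_{j+1/2} of a
            uniform grid, given point values f[j]; boundaries are ignored."""
            d_left  = f[j]   - f[j-1]          # undivided difference, left stencil
            d_right = f[j+1] - f[j]            # undivided difference, right stencil
            if abs(d_left) <= abs(d_right):    # shift to the smoother side
                return f[j] + 0.5 * d_left     # extrapolate from {j-1, j}
            return f[j] + 0.5 * d_right        # interpolate from {j, j+1}

        x = np.linspace(0.0, 1.0, 11)
        f = np.where(x < 0.55, x, x + 1.0)     # piecewise-smooth data with a jump
        print(eno2_midpoint(f, 5))             # 0.55: the stencil avoids the jump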
  • De Ryck, Tim; Mishra, Siddhartha (2022)
    Advances in Computational Mathematics
    Physics-informed neural networks approximate solutions of PDEs by minimizing pointwise residuals. We derive rigorous bounds on the error incurred by PINNs in approximating the solutions of a large class of linear parabolic PDEs, namely Kolmogorov equations, which include the heat equation and the Black-Scholes equation of option pricing as examples. We construct neural networks whose PINN residual (generalization error) can be made as small as desired. We also prove that the total L2-error can be bounded by the generalization error, which in turn is bounded in terms of the training error, provided that a sufficient number of randomly chosen training (collocation) points is used. Moreover, we prove that the size of the PINNs and the number of training samples grow only polynomially with the underlying dimension, enabling PINNs to overcome the curse of dimensionality in this context. These results enable us to provide a comprehensive error analysis for PINNs in approximating Kolmogorov PDEs.
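    As a concrete instance of the pointwise residual being analysed, here is a minimal PINN residual for the 1D heat equation u_t = u_xx, written with PyTorch autograd (a generic sketch, not code from the paper):

        import torch, torch.nn as nn

        net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 32),
                            nn.Tanh(), nn.Linear(32, 1))

        def heat_residual(net, t, x):
            """Pointwise PINN residual r = u_t - u_xx for the heat equation,
            computed with automatic differentiation."""
            t = t.requires_grad_(True); x = x.requires_grad_(True)
            u = net(torch.stack([t, x], dim=-1)).squeeze(-1)
            u_t  = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
            u_x  = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
            u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
            return u_t - u_xx

        # Monte Carlo estimate of the generalization error (mean squared residual)
        t, x = torch.rand(1024), torch.rand(1024)
        loss = heat_residual(net, t, x).pow(2).mean()
        loss.backward()  # ready for a gradient step on the network parameters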
  • De Ryck, Tim; Lanthaler, Samuel; Mishra, Siddhartha (2021)
    Neural Networks
    We derive bounds on the error, in high-order Sobolev norms, incurred in the approximation of Sobolev-regular as well as analytic functions by neural networks with the hyperbolic tangent activation function. These bounds provide explicit estimates on the approximation error with respect to the size of the neural networks. We show that tanh neural networks with only two hidden layers suffice to approximate functions at comparable or better rates than much deeper ReLU neural networks.
  • De Ryck, Tim; Bonnet, Florent; Mishra, Siddhartha; et al. (2023)
    SAM Research Report
    In this paper, we investigate the behavior of gradient descent algorithms in physics-informed machine learning methods like PINNs, which minimize residuals connected to partial differential equations (PDEs). Our key result is that the difficulty in training these models is closely related to the conditioning of a specific differential operator. This operator, in turn, is associated with the Hermitian square of the differential operator of the underlying PDE. If this operator is ill-conditioned, it results in slow or infeasible training. Therefore, preconditioning this operator is crucial. We employ both rigorous mathematical analysis and empirical evaluations to investigate various strategies, explaining how they better condition this critical operator and consequently improve training.
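    In symbols (a schematic rendering under assumed notation, not a statement quoted from the report): for a linear PDE D u = f with a residual-based loss, the loss gradient is driven by the Hermitian square D*D,

        \mathcal{L}(\theta) = \tfrac{1}{2}\,\big\| \mathcal{D} u_\theta - f \big\|_{L^2}^2,
        \qquad
        \nabla_\theta \mathcal{L}(\theta)
          = \big\langle \mathcal{D}^{*}\big(\mathcal{D} u_\theta - f\big),\, \nabla_\theta u_\theta \big\rangle_{L^2},

    so a large condition number of D*D slows gradient descent, and a preconditioner P is sought for which the condition number of (PD)*(PD) is much smaller than that of D*D.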
  • Raonic, Bogdan; Molinaro, Roberto; De Ryck, Tim; et al. (2024)
    Advances in Neural Information Processing Systems 36
    Although very successfully used in conventional machine learning, convolution-based neural network architectures, believed to be inconsistent in function space, have been largely ignored in the context of learning solution operators of PDEs. Here, we present novel adaptations for convolutional neural networks to demonstrate that they are indeed able to process functions as inputs and outputs. The resulting architecture, termed convolutional neural operators (CNOs), is designed specifically to preserve its underlying continuous nature, even when implemented in a discretized form on a computer. We prove a universality theorem to show that CNOs can approximate operators arising in PDEs to desired accuracy. CNOs are tested on a novel suite of benchmarks, encompassing a diverse set of PDEs with possibly multi-scale solutions, and are observed to significantly outperform baselines, paving the way for an alternative framework for robust and accurate operator learning.
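    A toy rendering of the core design point (a hypothetical simplification; the actual CNO uses carefully designed bandlimited up- and downsampling): apply the pointwise nonlinearity on an upsampled grid and low-pass filter back down, so that the discrete layer stays approximately consistent with its continuous counterpart.

        import torch, torch.nn as nn, torch.nn.functional as F

        class CNOBlockSketch(nn.Module):
            """Toy alias-aware conv block: convolve, upsample, apply the
            activation at the finer resolution, then downsample again.
            A hypothetical simplification of a CNO layer."""
            def __init__(self, channels):
                super().__init__()
                self.conv = nn.Conv1d(channels, channels, kernel_size=5, padding=2)

            def forward(self, v):                       # v: (batch, channels, n)
                v = self.conv(v)
                v = F.interpolate(v, scale_factor=2, mode='linear',
                                  align_corners=False)
                v = F.gelu(v)                           # nonlinearity on the finer grid
                v = F.avg_pool1d(v, kernel_size=2)      # crude low-pass + downsample
                return v

        block = CNOBlockSketch(8)
        print(block(torch.randn(4, 8, 64)).shape)       # torch.Size([4, 8, 64])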
  • De Ryck, Tim; Mishra, Siddhartha; Molinaro, Roberto (2022)
    SAM Research Report
    Physics-informed neural networks (PINNs) require regularity of solutions of the underlying PDE to guarantee accurate approximation. Consequently, they may fail at approximating discontinuous solutions of PDEs such as nonlinear hyperbolic equations. To ameliorate this, we propose a novel variant of PINNs, termed weak PINNs (wPINNs), for accurate approximation of entropy solutions of scalar conservation laws. wPINNs are based on approximating the solution of a min-max optimization problem for a residual, defined in terms of Kruzkhov entropies, to determine parameters for the neural networks approximating the entropy solution as well as test functions. We prove rigorous bounds on the error incurred by wPINNs and illustrate their performance through numerical experiments, demonstrating that wPINNs can approximate entropy solutions accurately.
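    Schematically (paraphrased under assumed notation, not quoted from the paper): for a scalar conservation law u_t + f(u)_x = 0, the Kruzkhov entropy residual tested against a nonnegative test function phi and a constant c reads

        \mathcal{R}(u, \varphi, c)
          = \int_0^T \!\!\int_{\mathbb{R}}
            \Big( |u - c|\,\partial_t \varphi
                  + \operatorname{sgn}(u - c)\,\big(f(u) - f(c)\big)\,\partial_x \varphi \Big)\, dx\, dt,

    and (ignoring initial and boundary terms) entropy solutions satisfy R(u, phi, c) >= 0 for all admissible (phi, c). wPINNs therefore minimize, over the parameters of the solution network, the worst case over network-parametrized test functions of a loss penalizing violations of this inequality.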
  • De Ryck, Tim; Mishra, Siddhartha (2024)
    Acta Numerica
    Physics-informed neural networks (PINNs) and their variants have been very popular in recent years as algorithms for the numerical simulation of both forward and inverse problems for partial differential equations. This article aims to provide a comprehensive review of currently available results on the numerical analysis of PINNs and related models that constitute the backbone of physics-informed machine learning. We provide a unified framework in which analysis of the various components of the error incurred by PINNs in approximating PDEs can be effectively carried out. We present a detailed review of available results on approximation, generalization and training errors and their behaviour with respect to the type of the PDE and the dimension of the underlying domain. In particular, we elucidate the role of the regularity of the solutions and their stability to perturbations in the error analysis. Numerical results are also presented to illustrate the theory. We identify training errors as a key bottleneck which can adversely affect the overall performance of various models in physics-informed machine learning.
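    The framework's error decomposition can be summarised schematically (notation illustrative, not the article's exact statement): writing E_T for the training error at N collocation points and E_G for the generalization error (the exact residual), stability of the PDE to perturbations yields a chain of the form

        \big\| u - u_{\hat\theta} \big\|
          \;\lesssim\;
          \mathcal{E}_G(\hat\theta)^{\alpha}
          \;=\;
          \Big( \mathcal{E}_T(\hat\theta)
                + \underbrace{\big(\mathcal{E}_G(\hat\theta) - \mathcal{E}_T(\hat\theta)\big)}_{\text{generalization gap}\,\to\,0 \text{ as } N \to \infty} \Big)^{\alpha},

    where the exponent alpha and the hidden constant come from the stability of the PDE, approximation results guarantee that a small E_G is attainable, and the training error term is the bottleneck identified above.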
  • De Ryck, Tim; Mishra, Siddhartha (2022)
    SAM Research Report
    We derive rigorous bounds on the error resulting from the approximation of the solution of parametric hyperbolic scalar conservation laws with ReLU neural networks. We show that the approximation error can be made as small as desired with ReLU neural networks that overcome the curse of dimensionality. In addition, we provide an explicit upper bound on the generalization error in terms of the training error, number of training samples and the neural network size. The theoretical results are illustrated by numerical experiments.
  • De Ryck, Tim (2024)
    Physics-informed machine learning is a popular framework that allows the numerical simulation of both forward and inverse problems for partial differential equations without needing any (potentially expensive) training data. It consists of the optimization of a parametric model based on a PDE residual-based loss function, with the most famous example being physics-informed neural networks (PINNs). This thesis addresses the relative paucity of mathematical guarantees concerning the performance of these physics-informed models by proposing a unified framework in which the numerical analysis of the various components of the incurred error can be effectively carried out. We develop rigorous results on approximation, generalization and training errors and investigate their behavior with respect to the dimension of the underlying domain and the type of the PDE, with the Navier-Stokes equations, high-dimensional linear Kolmogorov equations and inviscid scalar conservation laws serving as primary case studies. In order to provide bounds on the approximation error, we prove that neural networks can approximate Sobolev-regular functions arbitrarily well in higher-order Sobolev norms and that generic approximation results for neural networks, PINNs, and (physics-informed) operator learning can be obtained from each other after verifying a few assumptions. We demonstrate that approximation results can be obtained even when PDE solutions are discontinuous and the physics-informed loss function is based on a weak PDE residual, such as for weak PINNs for scalar conservation laws. Next, we prove for a number of PDEs that a small physics-informed loss indeed implies a small L2-error and investigate whether such stability holds as well for extended PINNs (XPINNs) and conservative PINNs (cPINNs). We follow up by providing upper bounds on the generalization gap, leading to guidelines on the necessary size of the training set for the model to generalize well to the whole (unseen) domain. Finally, we investigate the behavior of gradient descent algorithms in physics-informed machine learning and find that the difficulty in training these models is closely related to the conditioning of a differential operator associated with the Hermitian square of the differential operator of the underlying PDE, and to the chosen model. If this operator is ill-conditioned, it results in slow or infeasible training, which suggests that preconditioning this operator is crucial. We employ both rigorous mathematical analysis and empirical evaluations to investigate various strategies, explaining how they better condition this critical operator and consequently improve training.