Journal: Constructive Approximation
Abbreviation: Constr Approx
Publisher: Springer
Search Results
Publications 1–9 of 9
- Sparse approximation of singularity functions
  Item type: Journal Article
  Constructive Approximation
  Nitsche, Pal-Andrej (2004)

- Global universal approximation of functional input maps on weighted spaces
  Item type: Journal Article
  Constructive Approximation
  Cuchiero, Christa; Schmocker, Philipp; Teichmann, Josef (2025)
  We introduce so-called functional input neural networks defined on a possibly infinite-dimensional weighted space, with values also in a possibly infinite-dimensional output space. To this end, we use an additive family to map the input weighted space to the hidden layer, on which a non-linear scalar activation function is applied to each neuron, and finally return the output via some linear readouts. Relying on Stone–Weierstrass theorems on weighted spaces, we prove a global universal approximation result on weighted spaces for continuous functions, going beyond the usual approximation on compact sets. This applies in particular to the approximation of (non-anticipative) path-space functionals via functional input neural networks. As a further application of the weighted Stone–Weierstrass theorem, we prove a global universal approximation result for linear functions of the signature. We also introduce the viewpoint of Gaussian process regression in this setting and emphasize that the reproducing kernel Hilbert spaces of the signature kernels are Cameron–Martin spaces of certain Gaussian processes. This paves the way towards uncertainty quantification for signature kernel regression.

- Neural Network Identifiability for a Family of Sigmoidal Nonlinearities
  Item type: Journal Article
  Constructive Approximation
  Vlačić, Verner; Bölcskei, Helmut (2022)
  This paper addresses the following question of neural network identifiability: does the input–output map realized by a feed-forward neural network with respect to a given nonlinearity uniquely specify the network architecture, weights, and biases? The existing literature on the subject (Sussman in Neural Netw 5(4):589–593, 1992; Albertini et al. in Artificial neural networks for speech and vision, 1993; Fefferman in Rev Mat Iberoam 10(3):507–555, 1994) suggests that the answer should be yes, up to certain symmetries induced by the nonlinearity, and provided that the networks under consideration satisfy certain “genericity conditions.” The results in Sussman (1992) and Albertini et al. (1993) apply to networks with a single hidden layer, while in Fefferman (1994) the networks need to be fully connected. In an effort to answer the identifiability question in greater generality, we derive necessary genericity conditions for the identifiability of neural networks of arbitrary depth and connectivity with an arbitrary nonlinearity. Moreover, we construct a family of nonlinearities for which these genericity conditions are minimal, i.e., both necessary and sufficient. This family is large enough to approximate many commonly encountered nonlinearities to within arbitrary precision in the uniform norm.

- Best m-Term Approximation and Sobolev–Besov Spaces of Dominating Mixed Smoothness
  Item type: Journal Article
  Constructive Approximation
  Hansen, Markus; Sickel, Winfried (2012)

- DNN Expression Rate Analysis of High-Dimensional PDEs: Application to Option Pricing
  Item type: Journal Article
  Constructive Approximation
  Elbrächter, Dennis; Grohs, Philipp; Jentzen, Arnulf; et al. (2022)
  We analyze approximation rates by deep ReLU networks of a class of multivariate solutions of Kolmogorov equations which arise in option pricing. Key technical devices are deep ReLU architectures capable of efficiently approximating tensor products. Combining this with results concerning the approximation of well-behaved (i.e., fulfilling some smoothness properties) univariate functions provides insights into rates of deep ReLU approximation of multivariate functions with tensor structures. We apply this in particular to the model problem given by the price of a European maximum option on a basket of d assets within the Black–Scholes model. We prove that the solution to the d-variate option pricing problem can be approximated up to an ε-error by a deep ReLU network with depth O(ln(d) ln(ε⁻¹) + ln(d)²) and O(d^{2+1/n} ε^{-1/n}) nonzero weights, where n ∈ ℕ is arbitrary (with the constant implied in O(·) depending on n). The techniques developed in the constructive proof are of independent interest in the analysis of the expressive power of deep neural networks for solution manifolds of PDEs in high dimension.

- Global Universal Approximation of Functional Input Maps on Weighted Spaces
  Item type: Journal Article
  Constructive Approximation
  Cuchiero, Christa; Schmocker, Philipp; Teichmann, Josef (2026)
  We introduce so-called functional input neural networks defined on a possibly infinite-dimensional weighted space, with values also in a possibly infinite-dimensional output space. To this end, we use an additive family to map the input weighted space to the hidden layer, on which a non-linear scalar activation function is applied to each neuron, and finally return the output via some linear readouts. Relying on Stone–Weierstrass theorems on weighted spaces, we prove a global universal approximation result on weighted spaces for continuous functions, going beyond the usual approximation on compact sets. This applies in particular to the approximation of (non-anticipative) path-space functionals via functional input neural networks. As a further application of the weighted Stone–Weierstrass theorem, we prove a global universal approximation result for linear functions of the signature. We also introduce the viewpoint of Gaussian process regression in this setting and emphasize that the reproducing kernel Hilbert spaces of the signature kernels are Cameron–Martin spaces of certain Gaussian processes. This paves the way towards uncertainty quantification for signature kernel regression.

- Best N term approximation spaces for tensor product wavelet bases
  Item type: Journal Article
  Constructive Approximation
  Nitsche, Pal-Andrej (2006)

- An Adaptive Wavelet Method for Solving High-Dimensional Elliptic PDEs
  Item type: Journal Article
  Constructive Approximation
  Dijkema, Tammo J.; Schwab, Christoph; Stevenson, Rob (2009)

- Exponential ReLU DNN Expression of Holomorphic Maps in High Dimension
  Item type: Journal Article
  Constructive Approximation
  Opschoor, Joost A.A.; Schwab, Christoph; Zech, Jakob (2022)
  For a parameter dimension d ∈ ℕ, we consider the approximation of many-parametric maps u : [-1,1]^d → ℝ by deep ReLU neural networks. The input dimension d may possibly be large, and we assume quantitative control of the domain of holomorphy of u: i.e., u admits a holomorphic extension to a Bernstein polyellipse E_{ρ_1} × ⋯ × E_{ρ_d} ⊂ ℂ^d of semiaxis sums ρ_i > 1 containing [-1,1]^d. We establish the exponential rate O(exp(-b N^{1/(d+1)})) of expressive power in terms of the total NN size N and of the input dimension d of the ReLU NN in W^{1,∞}([-1,1]^d). The constant b > 0 depends on (ρ_j)_{j=1}^d, which characterizes the coordinate-wise sizes of the Bernstein ellipses for u. We also prove exponential convergence in stronger norms for the approximation by DNNs with more regular, so-called “rectified power unit” activations. Finally, we extend DNN expression rate bounds also to two classes of non-holomorphic functions, in particular to d-variate, Gevrey-regular functions and, by composition, to certain multivariate probability distribution functions with Lipschitz marginals.
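The architecture recurring in the Cuchiero–Schmocker–Teichmann abstracts above (an additive/affine map into a hidden layer, a scalar activation applied neuron-wise, and a linear readout) can be sketched in finite dimensions. This is a minimal illustrative sketch only, not the authors' construction: the weighted-space and infinite-dimensional aspects are dropped, and the names `A`, `b`, `W` and the choice of `tanh` are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Finite-dimensional stand-ins for the (possibly infinite-dimensional)
# input and output spaces of the papers.
d_in, d_hidden, d_out = 8, 32, 1

A = rng.standard_normal((d_hidden, d_in))   # affine map to the hidden layer
b = rng.standard_normal(d_hidden)           # hidden-layer biases
W = rng.standard_normal((d_out, d_hidden))  # linear readout

def network(x: np.ndarray) -> np.ndarray:
    """Affine map -> neuron-wise scalar activation -> linear readout."""
    hidden = np.tanh(A @ x + b)  # activation applied to each neuron
    return W @ hidden

y = network(rng.standard_normal(d_in))
print(y.shape)  # (1,)
```

In the papers, the universal approximation statement is global (on the whole weighted space, not just on compacta); the sketch only fixes the shape of one such network.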