Metadata only
Date
2023-01
Type
Journal Article
Abstract
Characterizing the function spaces corresponding to neural networks can provide a way to understand their properties. In this paper we discuss how the theory of reproducing kernel Banach spaces can be used to tackle this challenge. In particular, we prove a representer theorem for a wide class of reproducing kernel Banach spaces that admit a suitable integral representation and include one hidden layer neural networks of possibly infinite width. Further, we show that, for a suitable class of ReLU activation functions, the norm in the corresponding reproducing kernel Banach space can be characterized in terms of the inverse Radon transform of a bounded real measure, with norm given by the total variation norm of the measure. Our analysis simplifies and extends recent results in [45], [36], [37].
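As context for the abstract, the integral representation of a one hidden layer network of possibly infinite width, and the associated total variation norm, can be sketched as follows. The notation (activation σ, bounded real measure μ, and the choice of parameter domain) is an illustrative assumption, not taken from the record:

```latex
% Sketch under assumed notation: a width-free one hidden layer network
% parameterized by a bounded real measure \mu on weight-bias pairs (w, b).
f(x) \;=\; \int_{\mathbb{S}^{d-1} \times \mathbb{R}} \sigma\big(\langle w, x \rangle - b\big)\, d\mu(w, b),
\qquad
\|f\| \;=\; \inf\big\{\, \|\mu\|_{\mathrm{TV}} \;:\; \mu \text{ represents } f \,\big\}.
```

The infimum runs over all measures realizing the same function, since the representation need not be unique.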
Publication status
published
External links
Journal / series
Applied and Computational Harmonic Analysis
Volume
Pages / Article No.
Publisher
Elsevier
Subject
Neural networks; Reproducing kernel Banach spaces; Representer theorem; Radon transform
Organisational unit
09603 - Alaifari, Rima / Alaifari, Rima
Related publications and datasets
Is new version of: http://hdl.handle.net/20.500.11850/510985