Metadata only
Date
2017
Type
Conference Paper
Abstract
Many practical machine learning tasks employ very deep convolutional neural networks. Such large depths pose formidable computational challenges in training and operating the network. It is therefore important to understand how many layers are actually needed to have most of the input signal's features be contained in the feature vector generated by the network. This question can be formalized by asking how quickly the energy contained in the feature maps decays across layers. In addition, it is desirable that none of the input signal's features be “lost” in the feature extraction network or, more formally, we want energy conservation in the sense of the energy contained in the feature vector being proportional to that of the corresponding input signal. This paper establishes conditions for energy conservation for a wide class of deep convolutional neural networks and characterizes corresponding feature map energy decay rates. Specifically, we consider general scattering networks, and find that under mild analyticity and high-pass conditions on the filters (which encompass, inter alia, various constructions of Weyl-Heisenberg filters, wavelets, ridgelets, α-curvelets, and shearlets) the feature map energy decays at least polynomially. For broad families of wavelets and Weyl-Heisenberg filters, the guaranteed decay rate is shown to be exponential. Our results yield handy estimates of the number of layers needed to have at least ((1 - ε) · 100)% of the input signal energy be contained in the feature vector.
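For illustration only (not part of the record): a minimal sketch of how such a layer-count estimate could be computed, assuming a hypothetical exponential bound of the form a^(-n) on the fraction of input energy remaining beyond layer n, for some decay factor a > 1. The function name, the decay model, and the chosen numbers are assumptions, not the paper's exact expressions.

import math

def layers_for_energy_fraction(decay_factor, epsilon):
    # Assumed model: the energy not yet captured after n layers is bounded
    # by decay_factor**(-n) times the input energy (decay_factor > 1).
    # Then at least (1 - epsilon)*100% of the input energy is contained in
    # the feature vector once n >= log(1/epsilon) / log(decay_factor).
    return math.ceil(math.log(1.0 / epsilon) / math.log(decay_factor))

# Example: with decay_factor = 2 and epsilon = 0.01, seven layers suffice
# for at least 99% of the input signal energy under this assumed model.
print(layers_for_energy_fraction(2.0, 0.01))  # -> 7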
Publication status
Published
External links
Book title
2017 IEEE International Symposium on Information Theory (ISIT)
Pages / Article No.
Publisher
IEEE
Event
Organisational unit
03610 - Boelcskei, Helmut / Boelcskei, Helmut