Conversion of continuous-valued deep networks to efficient event-driven networks for image classification
Open access
Date
2017-12
Type
- Journal Article
Abstract
Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not include certain common operations such as max-pooling, softmax, batch-normalization and Inception-modules. This paper presents spiking equivalents of these operations, thereby allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations, whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. From the examples of LeNet for MNIST and BinaryNet for CIFAR-10, we show that with an increase in error rate of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs, in particular when deployed on power-efficient neuromorphic spiking neuron chips, for use in embedded applications.
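The conversion principle the abstract alludes to is rate coding: the firing rate of a non-leaky integrate-and-fire (IF) neuron with reset-by-subtraction approximates a ReLU activation. The sketch below is illustrative only (it is not the authors' released code); the function names and parameters are assumptions chosen for clarity.

```python
def if_neuron_rate(input_current, n_steps=1000, v_thresh=1.0):
    """Simulate a non-leaky IF neuron driven by a constant input;
    return its firing rate over n_steps timesteps.
    Illustrative sketch, not the paper's implementation."""
    v = 0.0          # membrane potential
    spikes = 0
    for _ in range(n_steps):
        v += input_current        # integrate the input each timestep
        if v >= v_thresh:         # threshold crossing produces a spike
            spikes += 1
            v -= v_thresh         # reset by subtraction keeps the residual charge
        # no leak: the membrane potential carries over between steps
    return spikes / n_steps       # spike rate in [0, 1]

def relu(x):
    return max(0.0, x)

# The IF rate tracks the ReLU of the input for activations in [0, 1):
for a in [-0.5, 0.0, 0.3, 0.7]:
    print(f"activation {a:+.1f}: ReLU={relu(a):.2f}, IF rate={if_neuron_rate(a):.2f}")
```

Negative inputs drive the potential downward and never cross threshold, mirroring ReLU's zero output; the residual-charge reset (rather than reset to zero) is what keeps the rate approximation tight over long simulation windows.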
Permanent link
https://doi.org/10.3929/ethz-b-000224685
Publication status
published
External links
Journal / series
Frontiers in Neuroscience
Volume
Pages / Article No.
Publisher
Frontiers Media
Subject
artificial neural network; spiking neural network; deep learning; object classification; deep networks; spiking network conversion
Organisational unit
02533 - Institut für Neuroinformatik / Institute of Neuroinformatics