Date
2016
Type
Conference Paper
Abstract
This paper presents a real-time multi-modal spiking Deep Neural Network (DNN) implemented on an FPGA platform. The hardware DNN system, called n-Minitaur, demonstrates a 4-fold improvement in computational speed over the previous DNN FPGA system. The proposed system directly interfaces two different event-based sensors: a Dynamic Vision Sensor (DVS) and a Dynamic Audio Sensor (DAS). The DNN for this bimodal hardware system is trained on the MNIST digit dataset and a set of unique audio tones for each digit. When tested on the spikes produced by each sensor alone, the classification accuracy is around 70% for DVS spikes generated in response to displayed MNIST images, and 60% for DAS spikes generated in response to noisy tones. The accuracy increases to 98% when spikes from both modalities are provided simultaneously. In addition, the system shows a fast latency response of only 5 ms.
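The fusion idea in the abstract, combining spike streams from a vision sensor and an audio sensor before classification, can be illustrated with a minimal sketch. This is not the paper's n-Minitaur implementation; the channel counts, event format, weights, and function names below are all hypothetical, and a simple linear readout over concatenated spike counts stands in for the spiking DNN.

```python
# Hedged sketch (not the paper's n-Minitaur code): late fusion of
# event-based inputs by concatenating per-channel spike counts from
# a vision sensor (DVS) and an audio sensor (DAS), then scoring each
# class with a toy linear readout.
from collections import Counter

N_VISION = 4    # hypothetical channel counts, far smaller than a real DVS
N_AUDIO = 2     # hypothetical DAS channel count
N_CLASSES = 3   # toy setup; the paper classifies the 10 MNIST digits

def spike_counts(events, n_channels):
    """Bin a list of (timestamp, channel) events into per-channel counts."""
    counts = Counter(ch for _, ch in events)
    return [counts.get(ch, 0) for ch in range(n_channels)]

def fused_scores(dvs_events, das_events, weights):
    """Concatenate both modalities' count vectors and apply a linear readout.

    `weights` is a list of per-class weight vectors over the fused features.
    """
    features = (spike_counts(dvs_events, N_VISION)
                + spike_counts(das_events, N_AUDIO))
    return [sum(w * f for w, f in zip(w_class, features))
            for w_class in weights]
```

In this simplification, a class that receives weak evidence from one modality alone can still win once the other modality's counts are concatenated in, which mirrors the abstract's observation that accuracy rises when both sensors contribute spikes simultaneously.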
Publication status
published
Book title
2016 IEEE International Symposium on Circuits and Systems (ISCAS)
Publisher
IEEE
Subject
Spiking Deep Networks; Dynamic Vision Sensor; Event-driven sensors; Sensor fusion; Hardware spiking network