Audio classification systems using deep neural networks and an event-driven auditory sensor


METADATA ONLY

Date

2019

Publication Type

Conference Paper

ETH Bibliography

yes

Abstract

We describe ongoing research in developing audio classification systems that use a spiking silicon cochlea as the front end. Event-driven features extracted from the spikes are fed to deep networks for the intended task. We describe a classification task on naturalistic audio sounds using a low-power silicon cochlea that outputs asynchronous events through a send-on-delta encoding of its sharply-tuned cochlea channels. Because of the event-driven nature of the processing, silences in these naturalistic sounds lead to a corresponding absence of cochlea spikes and savings in compute. Results show a 48% savings in compute, with a small loss in accuracy, when using cochlea events. © 2019 IEEE.
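The send-on-delta encoding mentioned in the abstract can be illustrated with a minimal sketch: an event is emitted only when a channel's signal has moved by more than a fixed threshold `delta` since the last event, so silent stretches produce no events and no downstream compute. The function name, threshold value, and test signal below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def send_on_delta(signal, delta):
    """Emit a (sample_index, polarity) event each time the signal moves
    by at least `delta` from the last emitted reference level."""
    events = []
    ref = signal[0]
    for i, x in enumerate(signal):
        while x - ref >= delta:   # signal rose by delta -> ON event
            ref += delta
            events.append((i, +1))
        while ref - x >= delta:   # signal fell by delta -> OFF event
            ref -= delta
            events.append((i, -1))
    return events

# Illustrative input: a 10 Hz tone for the first half, then silence.
t = np.linspace(0.0, 1.0, 1000)
sig = np.where(t < 0.5, np.sin(2 * np.pi * 10 * t), 0.0)
ev = send_on_delta(sig, 0.2)
```

During the tone, events are emitted roughly in proportion to the signal's total variation; during the silent second half, the reference level stops moving and essentially no events (and hence no computes in the downstream network) are generated.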

Publication status

published

Book title

Proceedings of the 2019 IEEE Sensors

Pages / Article No.

8956592

Publisher

IEEE

Event

18th IEEE Sensors Conference (SENSORS 2019)

Subject

Event-driven audio; Edge computing; Spiking cochlea; Deep learning; Sound classification; Low-power cochlea

Organisational unit

02533 - Institut für Neuroinformatik / Institute of Neuroinformatics
