Show simple item record

dc.contributor.author: Lagorce, Xavier
dc.contributor.author: Ieng, Sio-Hoi
dc.contributor.author: Clady, Xavier
dc.contributor.author: Pfeiffer, Michael
dc.contributor.author: Benosman, Ryad B.
dc.date.accessioned: 2019-06-05T09:37:15Z
dc.date.available: 2017-06-11T17:16:19Z
dc.date.available: 2019-06-05T09:37:15Z
dc.date.issued: 2015-02-24
dc.identifier.issn: 1662-453X
dc.identifier.issn: 1662-4548
dc.identifier.other: 10.3389/fnins.2015.00046 (en_US)
dc.identifier.uri: http://hdl.handle.net/20.500.11850/100578
dc.identifier.doi: 10.3929/ethz-b-000100578
dc.description.abstract: Bio-inspired asynchronous event-based vision sensors are currently introducing a paradigm shift in visual information processing. These new sensors rely on a stimulus-driven principle of light acquisition similar to biological retinas. They are event-driven and fully asynchronous, thereby reducing redundancy and encoding exact times of input signal changes, leading to a very precise temporal resolution. Approaches for higher-level computer vision often rely on the reliable detection of features in visual frames, but similar definitions of features for the novel dynamic and event-based visual input representation of silicon retinas have so far been lacking. This article addresses the problem of learning and recognizing features for event-based vision sensors, which capture properties of truly spatiotemporal volumes of sparse visual event information. A novel computational architecture for learning and encoding spatiotemporal features is introduced based on a set of predictive recurrent reservoir networks, competing via winner-take-all selection. Features are learned in an unsupervised manner from real-world input recorded with event-based vision sensors. It is shown that the networks in the architecture learn distinct and task-specific dynamic visual features, and can predict their trajectories over time. (en_US)
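
The abstract describes the architecture only in prose: several predictive recurrent reservoir (echo-state) networks compete via winner-take-all selection for incoming spatiotemporal input. The following Python sketch is a rough, hypothetical illustration of that idea, not the authors' implementation; it uses ordinary two-channel signals in place of real event-camera recordings, and all class names, parameters, and synthetic inputs are assumptions made for this example.

```python
# Minimal sketch of competing predictive reservoirs with winner-take-all
# selection. Illustrative only; not the code from the paper.
import numpy as np

rng = np.random.default_rng(0)

class Reservoir:
    """Tiny echo-state network whose linear read-out predicts the next input sample."""
    def __init__(self, n_in, n_res=100, spectral_radius=0.9, leak=0.3):
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))  # echo-state scaling
        self.W, self.leak = W, leak
        self.W_out = np.zeros((n_in, n_res))   # read-out weights, trained below
        self.x = np.zeros(n_res)               # reservoir state

    def step(self, u):
        """Leaky state update driven by input u; returns the prediction of the next input."""
        pre = self.W_in @ u + self.W @ self.x
        self.x = (1 - self.leak) * self.x + self.leak * np.tanh(pre)
        return self.W_out @ self.x

    def fit_readout(self, U, ridge=1e-3):
        """Ridge regression from reservoir states to the next input sample."""
        states = []
        for u in U[:-1]:
            self.step(u)
            states.append(self.x.copy())
        S, Y = np.stack(states), U[1:]
        self.W_out = Y.T @ S @ np.linalg.inv(S.T @ S + ridge * np.eye(S.shape[1]))

def winner_take_all(reservoirs, segment):
    """Index of the reservoir whose one-step predictions fit the segment best."""
    errors = []
    for r in reservoirs:
        r.x = np.zeros_like(r.x)                       # fresh state per segment
        preds = np.stack([r.step(u) for u in segment[:-1]])
        errors.append(np.mean((preds - segment[1:]) ** 2))
    return int(np.argmin(errors))

# Two synthetic 2-channel signals standing in for event-camera activity traces.
t = np.linspace(0, 4 * np.pi, 400)
pattern_a = np.stack([np.sin(t), np.cos(t)], axis=1)
pattern_b = np.stack([np.sign(np.sin(3 * t)), np.cos(2 * t)], axis=1)

reservoirs = [Reservoir(n_in=2) for _ in range(2)]
reservoirs[0].fit_readout(pattern_a)   # each reservoir specialises on one pattern
reservoirs[1].fit_readout(pattern_b)

print(winner_take_all(reservoirs, pattern_a))   # typically 0
print(winner_take_all(reservoirs, pattern_b))   # typically 1
```

In the paper's setting the competing predictors operate on sparse address-event streams from a silicon retina and the features are learned without supervision; here the ridge-regression read-out and the two toy patterns merely stand in for that pipeline to show how prediction error can drive winner-take-all assignment.
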
dc.format: application/pdf (en_US)
dc.language.iso: en (en_US)
dc.publisher: Frontiers Research Foundation (en_US)
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: Echo-state networks (en_US)
dc.subject: Spatiotemporal (en_US)
dc.subject: Feature extraction (en_US)
dc.subject: Recognition (en_US)
dc.subject: Silicon retinas (en_US)
dc.title: Spatiotemporal features for asynchronous event-based data (en_US)
dc.type: Journal Article
dc.rights.license: Creative Commons Attribution 4.0 International
ethz.journal.title: Frontiers in Neuroscience
ethz.journal.volume: 9 (en_US)
ethz.journal.abbreviated: Front Neurosci
ethz.pages.start: 46 (en_US)
ethz.size: 13 p. (en_US)
ethz.version.deposit: publishedVersion (en_US)
ethz.identifier.wos:
ethz.identifier.scopus:
ethz.identifier.nebis: 009497874
ethz.publication.place: Lausanne (en_US)
ethz.publication.status: published (en_US)
ethz.leitzahl: 03453 - Douglas, Rodney J. (en_US)
ethz.leitzahl.certified: 03453 - Douglas, Rodney J.
ethz.date.deposited: 2017-06-11T17:16:30Z
ethz.source: ECIT
ethz.identifier.importid: imp593653257304238595
ethz.ecitpid: pub:157922
ethz.eth: yes (en_US)
ethz.availability: Open access (en_US)
ethz.rosetta.installDate: 2017-07-25T11:15:17Z
ethz.rosetta.lastUpdated: 2019-06-05T09:37:33Z
ethz.rosetta.versionExported: true
ethz.COinS: ctx_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.atitle=Spatiotemporal%20features%20for%20asynchronous%20event-based%20data&rft.jtitle=Frontiers%20in%20Neuroscience&rft.date=2015-02-24&rft.volume=9&rft.spage=46&rft.issn=1662-453X&1662-4548&rft.au=Lagorce,%20Xavier&Ieng,%20Sio-Hoi&Clady,%20Xavier&Pfeiffer,%20Michael&Benosman,%20Ryad%20B.&rft.genre=article&