
dc.contributor.author: Moeys, Diederik P.
dc.contributor.author: Neil, Daniel
dc.contributor.author: Corradi, Federico
dc.contributor.author: Kerr, Emmett
dc.contributor.author: Vance, Philip
dc.contributor.author: Das, Gautham
dc.contributor.author: Coleman, Sonya A.
dc.contributor.author: McGinnity, Thomas M.
dc.contributor.author: Kerr, Dermot
dc.contributor.author: Delbrück, Tobias
dc.date.accessioned: 2019-02-22T10:08:04Z
dc.date.available: 2019-01-30T09:46:33Z
dc.date.available: 2019-02-22T10:08:04Z
dc.date.issued: 2018-07-02
dc.identifier.uri: http://hdl.handle.net/20.500.11850/321268
dc.description.abstract: Machine vision systems that use convolutional neural networks (CNNs) for robotic applications are increasingly being developed. Conventional vision CNNs are driven by camera frames at a constant sample rate and therefore achieve a fixed latency/power-consumption tradeoff. This paper describes further work on the first experiments with a closed-loop robotic system that integrates a CNN with a Dynamic and Active Pixel Vision Sensor (DAVIS) in a predator/prey scenario. The DAVIS, mounted on the predator Summit XL robot, produces frames at a fixed 15 Hz frame rate and Dynamic Vision Sensor (DVS) histograms containing 5k ON and OFF events at a variable frame rate ranging from 15 Hz to 500 Hz, depending on the robot speeds. In contrast to conventional frame-based systems, the latency and processing cost depend on the rate of change of the image. The CNN is trained offline on the 1.25 h labeled dataset to recognize the position and size of the prey robot in the predator's field of view. During inference, combining the ten output classes of the CNN allows the analog position vector of the prey relative to the predator to be extracted with a mean error of 8.7% in angular estimation. The system is compatible with conventional deep-learning technology but achieves a variable latency-power tradeoff that adapts automatically to the dynamics. Finally, the robustness of the algorithm, a comparison with human performance, and a deconvolution analysis are also explored.
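
The constant-event-count DVS histograms described in the abstract can be illustrated with a short sketch. The code below is not from the paper; the sensor resolution, the event tuple layout (timestamp in microseconds, x, y, polarity), and all function names are illustrative assumptions. It only shows why accumulating a fixed budget of 5k ON and OFF events per histogram yields an effective frame rate that rises and falls with the robots' speed.

    # Minimal sketch (assumed details, not the authors' code): accumulate DVS
    # events into constant-event-count histograms as described in the abstract.
    import numpy as np

    SENSOR_W, SENSOR_H = 240, 180    # assumed DAVIS pixel-array size
    EVENTS_PER_FRAME = 5000          # 5k ON+OFF events per histogram (from the abstract)

    def event_histograms(events):
        """Yield (histogram, duration_s); each histogram closes after
        EVENTS_PER_FRAME events, so the effective frame rate tracks scene dynamics."""
        hist = np.zeros((SENSOR_H, SENSOR_W), dtype=np.float32)
        count, t_start = 0, None
        for t_us, x, y, pol in events:          # event = (timestamp_us, x, y, polarity)
            if t_start is None:
                t_start = t_us
            hist[y, x] += 1.0                   # ON and OFF events counted together (assumption)
            count += 1
            if count == EVENTS_PER_FRAME:
                yield hist, (t_us - t_start) * 1e-6   # fast motion -> short span -> high rate
                hist = np.zeros((SENSOR_H, SENSOR_W), dtype=np.float32)
                count, t_start = 0, None

Similarly, the abstract states that the ten CNN output classes are combined into an analog position vector. One plausible decoding, offered only as an illustration and not as the paper's actual method, is a probability-weighted average (soft-argmax) over assumed angular bin centers:

    # Hypothetical decoding of 10 class scores into a continuous angle; the
    # field of view and bin layout are assumptions, not taken from the paper.
    import numpy as np

    FOV_DEG = 90.0                                     # illustrative horizontal field of view
    N_CLASSES = 10
    BIN_CENTERS = (np.arange(N_CLASSES) + 0.5) / N_CLASSES * FOV_DEG - FOV_DEG / 2

    def decode_angle(class_scores):
        """class_scores: length-10 vector of CNN logits; returns an angle in degrees."""
        p = np.exp(class_scores - np.max(class_scores))
        p /= p.sum()                                   # softmax -> class probabilities
        return float(p @ BIN_CENTERS)                  # expected angle over bin centers

With a decoder of this kind, the mean 8.7% angular-estimation error quoted in the abstract would be measured as the deviation of the continuous estimate from the labeled prey bearing.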
dc.language.iso: en
dc.publisher: Cornell University
dc.title: PRED18: Dataset and Further Experiments with DAVIS Event Camera in Predator-Prey Robot Chasing
dc.type: Working Paper
ethz.journal.title: arXiv
ethz.pages.start: 1807.03128
ethz.size: 8 p.
ethz.identifier.arxiv: 1807.03128
ethz.publication.place: Ithaca, NY
ethz.leitzahl: ETH Zürich::00002 - ETH Zürich::00012 - Lehre und Forschung::00007 - Departemente::02140 - Dep. Inf.technologie und Elektrotechnik / Dep. of Inform.Technol. Electrical Eng.::02533 - Institut für Neuroinformatik / Institute of Neuroinformatics
ethz.date.deposited: 2019-01-30T09:46:45Z
ethz.source: FORM
ethz.eth: yes
ethz.availability: Metadata only
ethz.rosetta.installDate: 2019-02-22T10:08:13Z
ethz.rosetta.lastUpdated: 2019-02-22T10:08:13Z
ethz.rosetta.exportRequired: false
ethz.rosetta.versionExported: true
