Integrating Event-based Dynamic Vision Sensors with Sparse Hyperdimensional Computing: A Low-power Accelerator with Online Capability

Open access
Date
2020-08
Type
- Conference Paper
Abstract
We propose to embed features extracted from event-driven dynamic vision sensors into binary sparse representations in hyperdimensional (HD) space for regression. This embedding compresses the events generated across 346x260 differential pixels into a sparse 8160-bit vector by applying random activation functions. The sparse representation not only simplifies inference but also enables online learning with the same memory footprint. Specifically, it allows efficient updates that retain binary vector components throughout online learning, which cannot otherwise be achieved with dense representations requiring multibit vector components. We demonstrate this online learning capability: using the estimates and confidences of an initial model trained on only 25% of the data, our method continuously updates the model on the remaining 75%, closely matching the accuracy of an oracle model trained on ground-truth labels. When mapped onto an 8-core accelerator, our method also achieves lower error, latency, and energy than other sparse/dense alternatives. Furthermore, it is 9.84x more energy-efficient and 6.25x faster than an optimized 9-layer perceptron with comparable accuracy.
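To illustrate the kind of embedding the abstract describes, the sketch below maps per-pixel event counts from a 346x260 sensor to an 8160-bit sparse binary HD vector. It is a minimal illustration only, not the authors' implementation: the paper's specific random activation functions are not reproduced here, and the thresholded random projection, the sparsity level, and all names (embed_events, SPARSITY) are assumptions made for this example.

```python
# Minimal sketch (NOT the paper's implementation): embed per-pixel DVS event
# counts into a sparse binary hyperdimensional vector. A fixed random projection
# followed by a top-k threshold stands in for the paper's random activation
# functions; the sparsity level and all parameter names are illustrative.
import numpy as np

H, W = 260, 346          # DVS resolution: 346x260 differential pixels
D = 8160                 # dimensionality of the sparse HD vector
SPARSITY = 0.05          # assumed fraction of active (1) components

rng = np.random.default_rng(0)
# Fixed random projection from the flattened pixel grid into HD space.
projection = rng.standard_normal((D, H * W)).astype(np.float32)

def embed_events(event_counts: np.ndarray) -> np.ndarray:
    """Map an (H, W) array of event counts to a D-bit sparse binary vector."""
    activations = projection @ event_counts.reshape(-1).astype(np.float32)
    # Keep only the top-k activations so the output has a fixed sparsity.
    k = int(SPARSITY * D)
    hd_vector = np.zeros(D, dtype=np.uint8)
    hd_vector[np.argpartition(activations, -k)[-k:]] = 1
    return hd_vector

# Example: a random burst of events across the pixel array.
counts = rng.poisson(0.1, size=(H, W))
v = embed_events(counts)
print(v.sum(), "active bits out of", D)
```

Because the resulting vector is binary and sparse, an online update can in principle flip or retain individual bits in place, which is the memory-footprint advantage over dense multibit representations that the abstract points to.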
Permanent link
https://doi.org/10.3929/ethz-b-000425534
Publication status
published
Book title
ISLPED '20: Proceedings of the ACM/IEEE International Symposium on Low Power Electronics and Design
Publisher
Association for Computing Machinery
Subject
Sparse HD computing; Robotics; Event-based cameras; Regression
Organisational unit
03996 - Benini, Luca / Benini, Luca
Funding
780215 - Computation-in-memory architecture based on resistive devices (EC)