Show simple item record

dc.contributor.author
Schmuck, Manuel
dc.contributor.author
Benini, Luca
dc.contributor.author
Rahimi, Abbas
dc.date.accessioned
2019-10-30T14:56:41Z
dc.date.available
2019-04-16T14:15:04Z
dc.date.available
2019-04-17T06:44:10Z
dc.date.available
2019-05-07T10:14:22Z
dc.date.available
2019-10-30T14:56:41Z
dc.date.issued
2019-10-04
dc.identifier.issn
1550-4832
dc.identifier.issn
1550-4840
dc.identifier.other
10.1145/3314326
en_US
dc.identifier.uri
http://hdl.handle.net/20.500.11850/338354
dc.identifier.doi
10.3929/ethz-b-000338354
dc.description.abstract
Brain-inspired hyperdimensional (HD) computing models neural activity patterns of the very size of the brain's circuits with points of a hyperdimensional space, that is, with hypervectors. Hypervectors are D-dimensional (pseudo)random vectors with independent and identically distributed (i.i.d.) components, constituting ultra-wide holographic words: D = 10,000 bits, for instance. At its core, HD computing manipulates a set of seed hypervectors to build composite hypervectors representing objects of interest. An efficient hardware realization demands memory optimizations with simple operations. In this paper, we propose hardware techniques for optimizing HD computing, in a synthesizable open-source VHDL library, to enable co-located implementation of both learning and classification tasks on only a small portion of Xilinx UltraScale FPGAs: (1) We propose simple logical operations to rematerialize the hypervectors on the fly rather than loading them from memory. These operations massively reduce the memory footprint by directly computing the composite hypervectors, whose individual seed hypervectors no longer need to be stored in memory. (2) Bundling a series of hypervectors over time requires a multibit counter per hypervector component. We instead propose a binarized back-to-back bundling that requires no counters. This truly enables on-chip learning with minimal resources, as every hypervector component remains binary over the course of training, avoiding otherwise multibit components. (3) For every classification event, an associative memory finds the closest match between a set of learned hypervectors and a query hypervector using a distance metric. The latency of this operation is proportional to the hypervector dimension (D), and hence a classification event may take O(D) cycles. Accordingly, we significantly improve classification throughput by proposing associative memories that steadily reduce the latency of classification, to the extreme of a single cycle. (4) We perform a design space exploration incorporating the proposed techniques on FPGAs for a wearable biosignal processing application as a case study. Our techniques achieve up to 2.39x area savings or 2337x throughput improvement. The Pareto-optimal HD architecture is mapped onto only 18,340 configurable logic blocks (CLBs) to learn and classify five hand gestures using four electromyography sensors.
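The dense binary HD operations the abstract refers to — rematerializing seed hypervectors instead of storing them, binding via XOR, bundling via component-wise majority, and nearest-match classification in an associative memory — can be sketched in a few lines. This is a minimal Python model for illustration only, not the paper's VHDL library; all function names are ours, and the rotation-based rematerialization is one simple stand-in for the paper's on-the-fly generation:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimension, as in the abstract

# One stored i.i.d. random binary seed; further seeds are rematerialized
# from it on the fly (here: by cyclic rotation) rather than kept in memory.
base = rng.integers(0, 2, D, dtype=np.uint8)

def seed(i):
    """Rematerialize seed hypervector i as a rotation of the stored base."""
    return np.roll(base, i)

def bind(a, b):
    """Binding of dense binary hypervectors: component-wise XOR."""
    return a ^ b

def bundle(vectors):
    """Bundling: component-wise majority vote over a set of hypervectors.
    This naive version counts per component -- the counters the paper's
    binarized back-to-back bundling is designed to eliminate."""
    return (np.sum(vectors, axis=0) * 2 >= len(vectors)).astype(np.uint8)

def classify(assoc_mem, query):
    """Associative memory: return the class label whose learned hypervector
    has the smallest Hamming distance to the query hypervector."""
    return min(assoc_mem, key=lambda c: int(np.count_nonzero(assoc_mem[c] ^ query)))
```

Note that `classify` as written scans all D components, mirroring the O(D) sequential baseline; the paper's combinational associative memories compute this match in hardware down to a single cycle.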
en_US
dc.format
application/pdf
en_US
dc.language.iso
en
en_US
dc.publisher
Association for Computing Machinery
en_US
dc.rights.uri
http://rightsstatements.org/page/InC-NC/1.0/
dc.subject
FPGA
en_US
dc.subject
Machine learning
en_US
dc.subject
Electromyography
en_US
dc.subject
Biosignal processing
en_US
dc.subject
Brain-inspired computing
en_US
dc.title
Hardware Optimizations of Dense Binary Hyperdimensional Computing: Rematerialization of Hypervectors, Binarized Bundling, and Combinational Associative Memory
en_US
dc.type
Journal Article
dc.rights.license
In Copyright - Non-Commercial Use Permitted
ethz.journal.title
ACM Journal on Emerging Technologies in Computing Systems
ethz.journal.volume
15
en_US
ethz.journal.issue
4
en_US
ethz.journal.abbreviated
ACM j. emerg. technol. comput. syst.
ethz.pages.start
32
en_US
ethz.size
25 p.
en_US
ethz.version.deposit
acceptedVersion
en_US
ethz.grant
Computation-in-memory architecture based on resistive devices
en_US
ethz.grant
ETH Zurich Postdoctoral Fellowship Program II
en_US
ethz.identifier.wos
ethz.identifier.scopus
ethz.publication.place
New York, NY
en_US
ethz.publication.status
published
en_US
ethz.leitzahl
ETH Zürich::00002 - ETH Zürich::00012 - Lehre und Forschung::00007 - Departemente::02140 - Dep. Inf.technologie und Elektrotechnik / Dep. of Inform.Technol. Electrical Eng.::02636 - Institut für Integrierte Systeme / Integrated Systems Laboratory::03996 - Benini, Luca / Benini, Luca
en_US
ethz.leitzahl.certified
ETH Zürich::00002 - ETH Zürich::00012 - Lehre und Forschung::00007 - Departemente::02140 - Dep. Inf.technologie und Elektrotechnik / Dep. of Inform.Technol. Electrical Eng.::02636 - Institut für Integrierte Systeme / Integrated Systems Laboratory::03996 - Benini, Luca / Benini, Luca
en_US
ethz.tag
Hyperdimensional computing
en_US
ethz.tag
Rematerialization
en_US
ethz.tag
Binarized temporal bundling
en_US
ethz.tag
Combinational associative memory
en_US
ethz.grant.agreementno
780215
ethz.grant.agreementno
608881
ethz.grant.fundername
EC
ethz.grant.fundername
EC
ethz.grant.funderDoi
10.13039/501100000780
ethz.grant.funderDoi
10.13039/501100000780
ethz.grant.program
H2020
ethz.grant.program
FP7
ethz.date.deposited
2019-04-16T14:15:16Z
ethz.source
FORM
ethz.eth
yes
en_US
ethz.availability
Open access
en_US
ethz.rosetta.installDate
2019-10-30T14:56:55Z
ethz.rosetta.lastUpdated
2022-03-28T23:59:28Z
ethz.rosetta.versionExported
true