Online Training of Spiking Recurrent Neural Networks with Phase-Change Memory Synapses
dc.contributor.author
Demirağ, Yiğit
dc.contributor.author
Frenkel, Charlotte
dc.contributor.author
Payvand, Melika
dc.contributor.author
Indiveri, Giacomo
dc.date.accessioned
2022-02-16T06:39:15Z
dc.date.available
2022-01-27T18:15:01Z
dc.date.available
2022-02-16T06:39:15Z
dc.date.issued
2021-08-25
dc.identifier.uri
http://hdl.handle.net/20.500.11850/529333
dc.identifier.doi
10.3929/ethz-b-000529333
dc.description.abstract
Spiking recurrent neural networks (RNNs) are a promising tool for solving a wide variety of complex cognitive and motor tasks, due to their rich temporal dynamics and sparse processing. However, training spiking RNNs on dedicated neuromorphic hardware is still an open challenge. This is mainly due to the lack of local, hardware-friendly learning mechanisms that can solve the temporal credit assignment problem and ensure stable network dynamics, even when the weight resolution is limited. These challenges are further accentuated if one resorts to using memristive devices for in-memory computing to resolve the von Neumann bottleneck problem, at the expense of a substantial increase in variability in both the computation and the working memory of the spiking RNNs. To address these challenges and enable online learning in memristive neuromorphic RNNs, we present a simulation framework of differential-architecture crossbar arrays based on an accurate and comprehensive Phase-Change Memory (PCM) device model. We train a spiking RNN whose weights are emulated in the presented simulation framework, using the recently proposed e-prop learning rule. Although e-prop locally approximates the ideal synaptic updates, it is difficult to implement the updates on the memristive substrate due to substantial PCM non-idealities. We compare several widely adopted weight update schemes that primarily aim to cope with these device non-idealities, and demonstrate that accumulating gradients can enable online and efficient training of spiking RNNs on memristive substrates.
en_US
dc.format
application/pdf
dc.language.iso
en
en_US
dc.publisher
Cornell University
en_US
dc.rights.uri
http://creativecommons.org/licenses/by/4.0/
dc.title
Online Training of Spiking Recurrent Neural Networks with Phase-Change Memory Synapses
en_US
dc.type
Working Paper
dc.rights.license
Creative Commons Attribution 4.0 International
ethz.journal.title
arXiv
ethz.pages.start
2108.01804
en_US
ethz.size
24 p.
en_US
ethz.version.edition
v2
en_US
ethz.identifier.arxiv
2108.01804
ethz.publication.place
Ithaca, NY
en_US
ethz.publication.status
published
en_US
ethz.leitzahl
ETH Zürich::00002 - ETH Zürich::00012 - Lehre und Forschung::00007 - Departemente::02140 - Dep. Inf.technologie und Elektrotechnik / Dep. of Inform.Technol. Electrical Eng.::02533 - Institut für Neuroinformatik / Institute of Neuroinformatics::09699 - Indiveri, Giacomo / Indiveri, Giacomo
en_US
ethz.leitzahl.certified
ETH Zürich::00002 - ETH Zürich::00012 - Lehre und Forschung::00007 - Departemente::02140 - Dep. Inf.technologie und Elektrotechnik / Dep. of Inform.Technol. Electrical Eng.::02533 - Institut für Neuroinformatik / Institute of Neuroinformatics::09699 - Indiveri, Giacomo / Indiveri, Giacomo
ethz.date.deposited
2022-01-27T18:15:07Z
ethz.source
BATCH
ethz.eth
yes
en_US
ethz.availability
Open access
en_US
ethz.rosetta.installDate
2022-02-16T06:39:21Z
ethz.rosetta.lastUpdated
2022-03-29T18:53:44Z
ethz.rosetta.versionExported
true
ethz.COinS
ctx_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.atitle=Online%20Training%20of%20Spiking%20Recurrent%20Neural%20Networks%20with%20Phase-Change%20Memory%20Synapses&rft.jtitle=arXiv&rft.date=2021-08-25&rft.spage=2108.01804&rft.au=Demira%C4%9F,%20Yi%C4%9Fit&Frenkel,%20Charlotte&Payvand,%20Melika&Indiveri,%20Giacomo&rft.genre=preprint&