
dc.contributor.author: Sun, Congyi
dc.contributor.author: Sun, Haohan
dc.contributor.author: Xu, Jin
dc.contributor.author: Han, Jianing
dc.contributor.author: Wang, Xinyuan
dc.contributor.author: Wang, Xinyu
dc.contributor.author: Chen, Qinyu
dc.contributor.author: Fu, Yuxiang
dc.contributor.author: Li, Li
dc.date.accessioned: 2023-03-09T16:28:05Z
dc.date.available: 2022-09-23T03:03:35Z
dc.date.available: 2022-09-26T06:39:17Z
dc.date.available: 2023-03-09T16:28:05Z
dc.date.issued: 2022-12
dc.identifier.issn: 1549-8328
dc.identifier.issn: 1057-7122
dc.identifier.issn: 1558-0806
dc.identifier.other: 10.1109/TCSI.2022.3204645 (en_US)
dc.identifier.uri: http://hdl.handle.net/20.500.11850/572439
dc.description.abstract: In this paper, we propose a spike-time-based unsupervised learning method using spike-timing-dependent plasticity (STDP). A simplified linear STDP learning rule is proposed for energy-efficient weight updates. To avoid unnecessary computations on the input spike values, a stopping mechanism is introduced in the forward pass. In addition, a hardware-friendly input quantization scheme is used to reduce the computational complexity of both the encoding phase and the forward pass. We construct a two-layer fully connected spiking neural network (SNN) based on the proposed method. Compared to general rate-based SNNs trained by STDP, the proposed method reduces the complexity of the network architecture (no extra inhibitory layer is needed) and the computation required for synaptic weight updates. In fixed-point simulation with 9-bit synaptic weights, the proposed SNN with 6144 excitatory neurons achieves 96% recognition accuracy on the MNIST dataset without any supervision. An SNN processor containing 384 excitatory neurons with on-chip learning capability is designed and implemented in 28 nm CMOS technology based on the proposed low-complexity methods. The SNN processor achieves an accuracy of 93% on the MNIST dataset. The implementation results show that the SNN processor achieves a throughput of 277.78k FPS with an energy consumption of 0.50 µJ/inference in inference mode, and a throughput of 211.77k FPS with 0.66 µJ/learning in learning mode. (en_US)
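The abstract only outlines the method, so the following is a minimal, illustrative Python sketch of the kind of pipeline it describes: coarsely quantized time-to-first-spike input coding, a forward pass that stops as soon as an excitatory neuron fires, and a simplified linear STDP weight update. The network sizes, threshold, learning rate, quantization levels, and the exact form of the update rule are assumptions made for illustration, not the paper's.

```python
# Minimal sketch (not the authors' implementation): a two-layer fully connected
# SNN trained with a simplified linear STDP rule under time-to-first-spike coding.
# All constants and the exact update form below are assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_EXC = 784, 64      # input pixels, excitatory neurons (paper uses 384 / 6144)
T = 16                     # discrete time steps per inference (assumed)
THRESHOLD = 50.0           # firing threshold (assumed)
ETA = 0.01                 # learning rate (assumed)

W = rng.uniform(0.0, 1.0, size=(N_EXC, N_IN))

def encode_ttfs(image, levels=T):
    """Hardware-friendly input quantization + temporal coding (assumed form):
    brighter pixels are quantized to earlier spike times in [0, levels-1]."""
    q = np.floor(np.clip(image, 0.0, 1.0) * (levels - 1)).astype(int)
    return (levels - 1) - q            # spike time per pixel; 0 = earliest

def forward(spike_times, w):
    """Integrate inputs in time order and stop as soon as a neuron fires
    (a stand-in for the abstract's forward-pass stopping mechanism)."""
    v = np.zeros(w.shape[0])
    for t in range(T):
        active = spike_times == t                  # inputs spiking at step t
        v += w[:, active].sum(axis=1)
        fired = np.flatnonzero(v >= THRESHOLD)
        if fired.size:                             # strongest firer wins
            return fired[np.argmax(v[fired])], t
    return None, T                                 # no neuron fired

def stdp_update(w, winner, spike_times, fire_time):
    """Simplified linear STDP (assumed): inputs spiking at or before the
    winner's firing time are potentiated, later inputs are depressed,
    with the weight change linear in the time difference."""
    dt = fire_time - spike_times                   # >= 0: pre before post
    dw = ETA * np.where(dt >= 0, 1.0 - dt / T, -0.5)
    w[winner] = np.clip(w[winner] + dw, 0.0, 1.0)

# Toy unsupervised training loop on random "images".
for _ in range(100):
    img = rng.random(N_IN)
    times = encode_ttfs(img)
    winner, t_fire = forward(times, W)
    if winner is not None:
        stdp_update(W, winner, times, t_fire)
```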
dc.language.iso: en (en_US)
dc.publisher: IEEE (en_US)
dc.subject: On-chip learning (en_US)
dc.subject: spiking neural network (en_US)
dc.subject: spike-timing-dependent plasticity (en_US)
dc.subject: temporal coding (en_US)
dc.subject: unsupervised learning (en_US)
dc.title: An Energy Efficient STDP-Based SNN Architecture With On-Chip Learning (en_US)
dc.type: Journal Article
dc.date.published: 2022-09-13
ethz.journal.title: IEEE Transactions on Circuits and Systems I: Regular Papers
ethz.journal.volume: 69 (en_US)
ethz.journal.issue: 12 (en_US)
ethz.pages.start: 5147 (en_US)
ethz.pages.end: 5158 (en_US)
ethz.identifier.wos:
ethz.identifier.scopus:
ethz.publication.place: New York, NY (en_US)
ethz.publication.status: published (en_US)
ethz.date.deposited: 2022-09-23T03:03:41Z
ethz.source: WOS
ethz.eth: yes (en_US)
ethz.availability: Metadata only (en_US)
ethz.rosetta.installDate: 2023-03-09T16:28:06Z
ethz.rosetta.lastUpdated: 2023-03-09T16:28:06Z
ethz.rosetta.versionExported: true

Files in this item


There are no files associated with this item.

