Distributed Representations Enable Robust Multi-Timescale Symbolic Computation in Neuromorphic Hardware


Date

2025-01-13

Publication Type

Working Paper

ETH Bibliography

yes

Abstract

Programming recurrent spiking neural networks (RSNNs) to robustly perform multi-timescale computation remains a difficult challenge. To address this, we describe a single-shot weight-learning scheme that embeds robust multi-timescale dynamics into attractor-based RSNNs by exploiting the properties of high-dimensional distributed representations. We embed finite state machines into the RSNN dynamics by superimposing a symmetric autoassociative weight matrix and asymmetric transition terms, each formed by binding an input vector with a heteroassociative outer product between states. Our approach is validated through simulations with highly nonideal weights, in an experimental closed-loop memristive hardware setup, and on Loihi 2, where it scales seamlessly to large state machines. This work introduces a scalable approach to embedding robust symbolic computation through recurrent dynamics into neuromorphic hardware, without requiring parameter fine-tuning or significant platform-specific optimisation. Moreover, it demonstrates that distributed symbolic representations serve as a highly capable, representation-invariant language for cognitive algorithms in neuromorphic hardware.
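The weight construction described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation (which uses spiking dynamics on neuromorphic substrates): the dimension, the Hadamard-product binding convention, and the transition gain `beta` are illustrative assumptions. It superimposes a symmetric autoassociative matrix, which makes each state a fixed-point attractor, with an asymmetric transition term formed by binding an input vector into a heteroassociative outer product, so that a matching input drives a state transition while a mismatched input only injects O(1/sqrt(D)) crosstalk.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 2048                      # hypervector dimension (assumed for the sketch)

def hv():
    """Random bipolar (+1/-1) hypervector."""
    return rng.choice([-1.0, 1.0], size=D)

# Two FSM states and two input symbols as random distributed codes
s = {k: hv() for k in ("A", "B")}
x = {k: hv() for k in ("p", "q")}

# Symmetric autoassociative term: every state pattern is an attractor
W_auto = sum(np.outer(v, v) for v in s.values()) / D

# Asymmetric transition term for the single edge A --p--> B: the input
# symbol is bound (elementwise product) into the presynaptic pattern
W_trans = np.outer(s["B"], s["A"] * x["p"]) / D

def step(state, inp, beta=2.0):
    """One synchronous update while input `inp` is presented.

    Binding `inp` into the state unbinds the matching transition, which
    then outweighs the attractor term (beta > 1); a non-matching input
    leaves only incoherent crosstalk, so the attractor holds the state.
    """
    u = W_auto @ state + beta * (W_trans @ (state * inp))
    return np.sign(u)

def overlap(u, v):
    return float(u @ v) / D

state = s["A"]
state = step(state, x["q"])        # no A--q edge: state stays in A
print(overlap(state, s["A"]))
state = step(state, x["p"])        # matching edge drives A -> B
print(overlap(state, s["B"]))
```

Both printed overlaps come out near 1: the mismatched input `q` leaves the network in attractor A, while the matching input `p` switches it to B in a single update.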

Publication status

published

Publisher

Cornell University

Edition / version

v3

Subject

Neural and Evolutionary Computing (cs.NE); Artificial Intelligence (cs.AI); FOS: Computer and information sciences

Organisational unit

09699 - Indiveri, Giacomo / Indiveri, Giacomo
