Neuromorphic dreaming as a pathway to efficient learning in artificial agents
OPEN ACCESS
Date
2025-12
Publication Type
Journal Article
ETH Bibliography
yes
Abstract
The computational substrate of biological systems exhibits a remarkable ability to learn complex skills quickly and efficiently. Inspired by this, we implement model-based reinforcement learning using spiking neural networks directly on mixed-signal neuromorphic hardware. This approach combines energy-efficient electronic circuits with high sample efficiency through alternating online (‘awake’) and offline (‘dreaming’) learning phases. Our model features two networks: an agent network that learns from both real and simulated experiences, and a world model network that generates the simulated experiences. We validate the approach by training the system to play the Atari game Pong. First, we establish a baseline using only real experiences. Then we show that ‘dreaming’ significantly reduces the number of real experiences required. The network dynamics run in real time on the analog neuromorphic circuits; only the readout layers are implemented and trained on a computer in the loop. We present results that demonstrate the robustness and potential of energy-efficient mixed-signal neuromorphic processors for real-world applications.
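The alternating awake/dreaming scheme described in the abstract can be illustrated with a deliberately small sketch. Everything here is a stand-in, not the paper's implementation: a 5-state chain environment replaces Atari Pong, tabular Q-learning replaces the spiking agent network, and a lookup table fitted from observed transitions replaces the world model network. The structure, however, mirrors the described loop: act in the real environment ('awake'), fit the world model from those transitions, then run extra policy updates on model-generated transitions ('dreaming').

```python
import random

# Hypothetical toy environment: a 5-state chain, reward 1 for reaching
# the rightmost state (stands in for Pong in the paper's setup).
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # 0 = left, 1 = right

def env_step(s, a):
    s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
    r = 1.0 if s2 == GOAL else 0.0
    return s2, r, s2 == GOAL

# Agent: tabular Q-learning (the paper's agent is a spiking network on
# neuromorphic hardware; a Q-table is the simplest stand-in).
Q = [[0.0, 0.0] for _ in range(N_STATES)]
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

def q_update(s, a, r, s2, done):
    target = r + (0.0 if done else GAMMA * max(Q[s2]))
    Q[s][a] += ALPHA * (target - Q[s][a])

# World model: maps (s, a) -> (s', r, done), fitted from real transitions.
# The environment is deterministic, so a lookup table fits it exactly.
world = {}

def dream(n_updates, rng):
    """Offline ('dreaming') phase: update the agent on transitions
    generated by the world model instead of the real environment."""
    keys = list(world)
    for _ in range(n_updates):
        s, a = rng.choice(keys)
        s2, r, done = world[(s, a)]
        q_update(s, a, r, s2, done)

rng = random.Random(0)
for episode in range(20):               # 'awake' phases: few real episodes
    s, done = 0, False
    while not done:
        a = rng.randrange(2) if rng.random() < EPS \
            else max((0, 1), key=lambda x: Q[s][x])
        s2, r, done = env_step(s, a)
        world[(s, a)] = (s2, r, done)   # fit world model from real data
        q_update(s, a, r, s2, done)     # online learning from real data
        s = s2
    dream(50, rng)                      # 'dreaming' phase after each episode

policy = [max((0, 1), key=lambda x: Q[s][x]) for s in range(N_STATES)]
print(policy)  # greedy action per state
```

The point of the sketch is the sample-efficiency argument: each real episode is amortized over many model-generated updates, so fewer interactions with the real environment are needed to reach a good policy, which is what makes the scheme attractive when real experience is gathered on (comparatively slow) physical hardware.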
Publication status
published
Volume
5 (4)
Pages / Article No.
44005
Publisher
IOP Publishing
Subject
neuromorphic computing; spiking neural networks; model-based reinforcement learning; offline learning; sample efficiency; energy-efficient learning
