Date: 2022-06
Type: Report
ETH Bibliography: yes
Abstract
This work establishes that a physical system can perform statistical learning without gradient computations, via an \emph{Agnostic Equilibrium Propagation} (AEqprop) procedure that combines energy minimization, homeostatic control, and nudging towards the correct response. In AEqprop, the specifics of the system do not have to be known: the procedure is based only on external manipulations, and produces a stochastic gradient descent without explicit gradient computations. Thanks to nudging, the system performs a true, order-one gradient step for each training sample, in contrast with order-zero methods like reinforcement or evolutionary strategies, which rely on trial and error. This procedure considerably widens the range of potential hardware for statistical learning to any system with enough controllable parameters, even if the details of the system are poorly known. AEqprop also establishes that in natural (bio)physical systems, genuine gradient-based statistical learning may result from generic, relatively simple mechanisms, without backpropagation and its requirement for analytic knowledge of partial derivatives.
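The nudging idea described in the abstract can be sketched on a toy system. The following is a minimal illustration, not the paper's construction: the quadratic energy, the relaxation procedure, and all names (`energy`, `relax`, `aeqprop_step`) are illustrative assumptions. The system is treated as a black box that settles to an energy minimum; the learner compares the parameter sensitivity of the energy at the free equilibrium and at an equilibrium nudged towards the target, and the difference (divided by the nudging strength) yields an order-one gradient step without any explicit differentiation of the loss.

```python
# Toy sketch of equilibrium-propagation-style learning with nudging.
# Hypothetical names and energy; assumes E(s, w, x) = 0.5 * (s - w*x)^2.

def relax(w, x, y=None, beta=0.0, steps=200, lr=0.1):
    """Let the state s settle to a minimum of the (possibly nudged) energy.

    With beta > 0, a nudging term beta * 0.5 * (s - y)^2 pulls the
    equilibrium towards the target response y.
    """
    s = 0.0
    for _ in range(steps):
        grad = s - w * x              # dE/ds for the free energy
        if beta > 0.0:
            grad += beta * (s - y)    # contribution of the nudging cost
        s -= lr * grad
    return s

def aeqprop_step(w, x, y, beta=0.5, eta=0.5):
    """One nudged update: finite difference of dE/dw between the two equilibria."""
    s_free = relax(w, x)
    s_nudge = relax(w, x, y=y, beta=beta)
    dEdw_free = -(s_free - w * x) * x
    dEdw_nudge = -(s_nudge - w * x) * x
    # Order-one gradient step on the loss, obtained without differentiating it.
    return w - eta * (dEdw_nudge - dEdw_free) / beta

# Train on a single sample (x, y): the output w*x approaches the target y.
w = 0.0
for _ in range(100):
    w = aeqprop_step(w, x=2.0, y=3.0)
```

At the free equilibrium the state already minimizes the energy, so the parameter sensitivity there serves as a baseline; only the nudged phase carries information about the target, which is what makes the procedure agnostic to the system's internals.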
Publication status: published
Journal / series: SAM Research Report
Publisher: Seminar for Applied Mathematics, ETH Zurich
Organisational unit: 03851 - Mishra, Siddhartha