Journal: Frontiers in Computational Neuroscience

Abbreviation

Front. Comput. Neurosci.

Publisher

Frontiers Media

ISSN

1662-5188

Search Results

Publications 1 - 10 of 24
  • Dazzi, Martino; Sebastian, Abu; Benini, Luca; et al. (2021)
    Frontiers in Computational Neuroscience
    In-memory computing (IMC) is a non-von Neumann paradigm that has recently established itself as a promising approach for energy-efficient, high-throughput hardware for deep learning applications. One prominent application of IMC is that of performing matrix-vector multiplication in O(1) time complexity by mapping the synaptic weights of a neural-network layer to the devices of an IMC core. However, because of the significantly different pattern of execution compared to previous computational paradigms, IMC requires a rethinking of the architectural design choices made when designing deep-learning hardware. In this work, we focus on application-specific IMC hardware for inference of Convolutional Neural Networks (CNNs), and provide methodologies for implementing the various architectural components of the IMC core. Specifically, we present methods for mapping synaptic weights and activations on the memory structures and give evidence of the various trade-offs therein, such as the one between on-chip memory requirements and execution latency. Lastly, we show how to employ these methods to implement a pipelined dataflow that offers throughput and latency beyond the state of the art for image classification tasks. [A toy sketch of the crossbar mapping appears after this list.]
  • Binas, Jonathan; Rutishauser, Ueli; Indiveri, Giacomo; et al. (2014)
    Frontiers in Computational Neuroscience
    Winner-Take-All (WTA) networks are recurrently connected populations of excitatory and inhibitory neurons that represent promising candidate microcircuits for implementing cortical computation. WTAs can perform powerful computations, ranging from signal restoration to state-dependent processing. However, such networks require fine-tuned connectivity parameters to keep the network dynamics within stable operating regimes. In this article, we show how such stability can emerge autonomously through an interaction of biologically plausible plasticity mechanisms that operate simultaneously on all excitatory and inhibitory synapses of the network. A weight-dependent plasticity rule is derived from the triplet spike-timing-dependent plasticity model, and its stabilization properties in the mean-field case are analyzed using contraction theory. Our main result provides simple constraints on the plasticity rule parameters, rather than on the weights themselves, which guarantee stable WTA behavior. The plastic network we present is able to adapt to changing input conditions and to dynamically adjust its gain, therefore exhibiting self-stabilization mechanisms that are crucial for maintaining stable operation in large networks of interconnected subunits. We show how distributed neural assemblies can autonomously adjust their parameters for stable WTA function while respecting anatomical constraints on neural wiring. [A toy WTA dynamics sketch appears after this list.]
  • Zubler, Frédéric; Douglas, Rodney (2009)
    Frontiers in Computational Neuroscience
  • Meier, Florian; Dang-Nhu, Raphaël; Steger, Angelika (2020)
    Frontiers in Computational Neuroscience
    Natural brains perform miraculously well in learning new tasks from a small number of samples, whereas sample-efficient learning is still a major open problem in the field of machine learning. Here, we ask how the neural coding scheme affects sample efficiency, and make initial progress on this question by proposing and analyzing a learning algorithm that uses a simple REINFORCE-type plasticity mechanism and does not require any gradients to learn low-dimensional mappings. It harnesses three bio-plausible mechanisms, namely population codes with bell-shaped tuning curves, continuous attractor mechanisms, and probabilistic synapses, to achieve sample-efficient learning. We show both theoretically and by simulations that population codes with broadly tuned neurons lead to high sample efficiency, whereas codes with sharply tuned neurons account for high final precision. Moreover, a dynamic adaptation of the tuning width during learning gives rise to both high sample efficiency and high final precision. We prove a sample-efficiency guarantee for our algorithm that lies within a logarithmic factor of the information-theoretic optimum. Our simulations show that for low-dimensional mappings, our learning algorithm achieves sample efficiency comparable to multi-layer perceptrons trained by gradient descent, although it does not use any gradients. Furthermore, it achieves competitive sample efficiency in low-dimensional reinforcement learning tasks. From a machine learning perspective, these findings may inspire novel approaches to improving sample efficiency. From a neuroscience perspective, they suggest sample efficiency as a yet-unstudied functional role of adaptive tuning-curve width. [A toy population-coding sketch appears after this list.]
  • Cohn, Brian A.; Szedlák, May; Gärtner, Bernd; et al. (2018)
    Frontiers in Computational Neuroscience
    We present Feasibility Theory, a conceptual and computational framework to unify today's theories of neuromuscular control. We begin by describing how the musculoskeletal anatomy of the limb, the need to control individual tendons, and the physics of a motor task uniquely specify the family of all valid muscle activations that accomplish it (its ‘feasible activation space’). For our example of producing static force with a finger driven by seven muscles, computational geometry completely characterizes the structure of feasible activation spaces as 3-dimensional polytopes embedded in 7-D. The feasible activation space for a given task is the landscape where all neuromuscular learning, control, and performance must occur. This approach unifies current theories of neuromuscular control because the structure of feasible activation spaces can be separately approximated as either low-dimensional basis functions (synergies), high-dimensional joint probability distributions (Bayesian priors), or fitness landscapes (to optimize cost functions). [A toy rejection-sampling probe of such a polytope appears after this list.]
  • Parameters for burst detection
    Bakkum, Douglas J.; Radivojevic, Milos; Frey, Urs; et al. (2014)
    Frontiers in Computational Neuroscience
    Bursts of action potentials within neurons and throughout networks are believed to serve roles in how neurons handle and store information, both in vivo and in vitro. Accurate detection of burst occurrences and durations is therefore crucial for many studies. A number of algorithms have been proposed to do so, but a standard method has not been adopted. This is due, in part, to many algorithms requiring the adjustment of multiple ad-hoc parameters and further post-hoc criteria in order to produce satisfactory results. Here, we broadly catalog existing approaches and present a new approach requiring the selection of only a single parameter: the number of spikes N comprising the smallest burst to consider. A burst was identified if N spikes occurred in less than T ms, where the threshold T was automatically determined from observing a probability distribution of inter-spike intervals. Performance was compared against different classes of detectors on data gathered from in vitro neuronal networks grown over microelectrode arrays. Our approach offered a number of useful features, including a simple implementation, no need for ad-hoc or post-hoc criteria, and precise assignment of burst boundary time points. Unlike existing approaches, detection was not biased toward larger bursts, allowing identification and analysis of a greater range of neuronal and network dynamics. [A compact sketch of this scheme appears after this list.]
  • Diehl, Peter U.; Martel, Julien; Buhmann, Jakob; et al. (2018)
    Frontiers in Computational Neuroscience
  • Einarsson, Hafsteinn; Gauy, Marcelo M.; Lengler, Johannes; et al. (2017)
    Frontiers in Computational Neuroscience
    Hebbian changes of excitatory synapses are driven by and enhance correlations between pre- and postsynaptic neuronal activations, forming a positive feedback loop that can lead to instability in simulated neural networks. Because Hebbian learning may occur on time scales of seconds to minutes, it is conjectured that some form of fast stabilization of neural firing is necessary to avoid runaway excitation, but both the theoretical underpinning and the biological implementation of such a homeostatic mechanism remain to be fully investigated. Supported by analytical and computational arguments, we show that a Hebbian spike-timing-dependent metaplasticity rule accounts for inherently stable, quick tuning of the total input weight of a single neuron in the general scenario of asynchronous neural firing characterized by UP and DOWN states of activity. [A toy weight-dependent stabilization sketch appears after this list.]
  • Alessandro, Cristiano; Carbajal, Juan P.; d'Avella, Andrea (2014)
    Frontiers in Computational Neuroscience
    Analyses of experimental data acquired from humans and other vertebrates have suggested that motor commands may emerge from the combination of a limited set of modules. While many studies have focused on physiological aspects of this modularity, in this paper we propose an investigation of its theoretical foundations. We consider the problem of controlling a planar kinematic chain, and we restrict the admissible actuations to linear combinations of a small set of torque profiles (i.e., motor synergies). This scheme is equivalent to the time-varying synergy model, and it is formalized by means of the dynamic response decomposition (DRD). DRD is a general method to generate open-loop controllers for a dynamical system to solve desired tasks, and it can also be used to synthesize effective motor synergies. We show that a control architecture based on synergies can greatly reduce the dimensionality of the control problem while keeping a good performance level. Our results suggest that in order to realize an effective and low-dimensional controller, synergies should embed features of both the desired tasks and the system dynamics. These characteristics can be achieved by defining synergies as solutions to a representative set of task instances. The required number of synergies increases with the complexity of the desired tasks. However, a possible strategy to keep the number of synergies low is to construct solutions to complex tasks by concatenating synergy-based actuations associated with simple point-to-point movements, with a limited loss of performance. Ultimately, this work supports the feasibility of controlling a non-linear dynamical system by linear combinations of basic actuations, and illustrates the fundamental relationship between synergies, desired tasks, and system dynamics. [A minimal synergy-combination sketch appears after this list.]
  • Zubler, Frédéric; Hauri, Andreas; Pfister, Sabina; et al. (2011)
    Frontiers in Computational Neuroscience
    Biological systems are based on an entirely different concept of construction than human artifacts. They construct themselves by a process of self-organization that is a systematic spatio-temporal generation of, and interaction between, various specialized cell types. We propose a framework for designing gene-like codes for guiding the self-construction of neural networks. The description of neural development is formalized by defining a set of primitive actions taken locally by neural precursors during corticogenesis. These primitives can be combined into networks of instructions similar to biochemical pathways, capable of reproducing complex developmental sequences in a biologically plausible way. Moreover, the conditional activation and deactivation of these instruction networks can also be controlled by these primitives, allowing for the design of a “genetic code” containing both coding and regulating elements. We demonstrate in a simulation of physical cell development how this code can be incorporated into a single progenitor, which then, by replication and differentiation, reproduces important aspects of corticogenesis. [A toy instruction-network sketch appears after this list.]
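
Code Sketches

For the Dazzi et al. (2021) entry: a minimal sketch of the core in-memory-computing idea, assuming an idealized analog crossbar. The class name, the Gaussian noise model, and the im2col-style filter mapping are illustrative assumptions, not the authors' implementation.

    import numpy as np

    class IMCCrossbar:
        """Idealized in-memory-computing core: a layer's weights are
        programmed once as device conductances; each matrix-vector product
        is then a single parallel read-out, i.e. O(1) in the array
        dimensions rather than O(M*N) sequential multiply-accumulates."""

        def __init__(self, weights, noise_std=0.01):
            self.G = np.array(weights, dtype=float)  # programmed conductances
            self.noise_std = noise_std               # assumed device non-ideality

        def matvec(self, x):
            # Physically: apply voltages x, read the row currents I = G @ x.
            ideal = self.G @ x
            return ideal + np.random.normal(0.0, self.noise_std, ideal.shape)

    # Mapping a convolutional layer: unroll each 3x3x32 filter into one row.
    filters = np.random.randn(64, 3 * 3 * 32)
    core = IMCCrossbar(filters)
    patch = np.random.randn(3 * 3 * 32)  # one im2col activation patch
    out = core.matvec(patch)             # all 64 outputs in one read-out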
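
For the Binas et al. (2014) entry: a toy rate-based soft winner-take-all that illustrates the circuit class only; it is not the paper's spiking network or its plasticity rule, and the weight values are hand-tuned assumptions, which is precisely the tuning burden the paper's plasticity mechanisms remove.

    import numpy as np

    def soft_wta(inputs, w_exc=1.2, w_inh=1.0, w_ie=1.0, steps=400, dt=0.1):
        """Rate-based soft WTA: excitatory units with self-excitation compete
        through one global inhibitory unit. Whether the dynamics stay in a
        stable regime depends on the weights."""
        relu = lambda v: np.maximum(v, 0.0)
        x = np.zeros_like(inputs, dtype=float)  # excitatory firing rates
        y = 0.0                                 # inhibitory firing rate
        for _ in range(steps):
            x = x + dt * (-x + relu(inputs + w_exc * x - w_inh * y))
            y = y + dt * (-y + w_ie * x.sum())
        return x

    print(soft_wta(np.array([1.0, 1.5, 0.9])))  # only the strongest input survives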
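
For the Meier et al. (2020) entry: a small sketch of the encoding ingredient only, assuming Gaussian tuning curves and a population-vector readout; the learning rule itself is not reproduced, and all names and constants are illustrative. Broad tuning recruits many neurons per stimulus (helping sample efficiency), while sharp tuning localizes the code (helping final precision).

    import numpy as np

    def population_code(stimulus, centers, sigma):
        """Bell-shaped (Gaussian) tuning curves: each neuron's rate depends on
        the distance between the stimulus and its preferred value."""
        return np.exp(-0.5 * ((stimulus - centers) / sigma) ** 2)

    def population_vector(rates, centers):
        """Simple activity-weighted readout of the encoded stimulus."""
        return float(np.dot(rates, centers) / rates.sum())

    centers = np.linspace(0.0, 1.0, 50)  # preferred stimuli of 50 neurons
    for sigma in (0.3, 0.05):            # broad vs. sharp tuning
        rates = population_code(0.42, centers, sigma)
        print(f"sigma={sigma}: decoded {population_vector(rates, centers):.3f}")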
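
For the Cohn et al. (2018) entry: a toy numerical probe of a feasible activation space, with made-up dimensions (2 force components, 5 muscles) rather than the paper's 7-muscle finger model, and naive rejection sampling over a slightly thickened polytope standing in for the paper's exact computational geometry.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy linear force-production model: 2 force components, 5 muscles.
    A = rng.uniform(-1.0, 1.0, size=(2, 5))  # toy moment-arm/force matrix
    x0 = rng.uniform(0.2, 0.8, size=5)
    f = A @ x0                               # a static force we know is attainable

    def feasible_samples(n_candidates=200_000, tol=0.05):
        """Rejection-sample the (slightly thickened) feasible activation space
        {x in [0,1]^5 : ||A x - f|| <= tol}; the exact set is a polytope."""
        X = rng.uniform(0.0, 1.0, size=(n_candidates, 5))
        err = np.linalg.norm(X @ A.T - f, axis=1)
        return X[err <= tol]

    S = feasible_samples()
    print(f"{len(S)} feasible activations; mean: {S.mean(axis=0).round(2)}")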
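
For the Bakkum et al. (2014) entry: a compact sketch of the single-parameter scheme as the abstract describes it. The abstract derives the threshold T from the inter-spike-interval distribution; the percentile rule below is one plausible stand-in for that step, not the paper's exact procedure.

    import numpy as np

    def detect_bursts(spike_times_ms, n=4, isi_pct=25.0):
        """Single-parameter burst detector: a burst is seeded wherever n
        consecutive spikes span less than T ms, with T chosen automatically
        from the ISI distribution. Returns T and merged (start, end)
        burst boundary times."""
        t = np.sort(np.asarray(spike_times_ms, dtype=float))
        T = (n - 1) * np.percentile(np.diff(t), isi_pct)  # automatic threshold
        spans = t[n - 1:] - t[:-(n - 1)]                  # every n-spike window
        bursts = []
        for i in np.flatnonzero(spans < T):
            start, end = t[i], t[i + n - 1]
            if bursts and start <= bursts[-1][1]:
                bursts[-1][1] = max(bursts[-1][1], end)   # overlapping: merge
            else:
                bursts.append([start, end])
        return T, [tuple(b) for b in bursts]

    # Synthetic check: sparse background spiking with one dense burst inserted.
    rng = np.random.default_rng(1)
    quiet = np.cumsum(rng.exponential(100.0, 200))            # ~10 Hz background
    burst = quiet[50] + np.cumsum(rng.exponential(3.0, 40))   # ~330 Hz burst
    T, found = detect_bursts(np.concatenate([quiet, burst]))
    print(f"T = {T:.1f} ms; detected {len(found)} burst(s), first: {found[0]}")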
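
For the Einarsson et al. (2017) entry: a toy illustration of weight-dependent stabilization, not the paper's derived metaplasticity rule. Scaling each update by the distance of the total input weight from a target turns Hebbian positive feedback into a self-terminating process; all constants are assumed.

    import numpy as np

    rng = np.random.default_rng(2)

    def stabilizing_hebb(w, target=10.0, eta=0.05, steps=2000):
        """Toy weight-dependent Hebbian rule: on coincident pre/post activity,
        the update is scaled by (target - total weight), so potentiation
        flips to depression once the summed input weight overshoots."""
        total = []
        for _ in range(steps):
            pre = rng.random(w.size) < 0.2          # random presynaptic activity
            if w @ pre > 1.0:                       # crude postsynaptic spike
                w[pre] += eta * (target - w.sum())  # weight-dependent update
                np.clip(w, 0.0, None, out=w)        # weights stay non-negative
            total.append(w.sum())
        return np.array(total)

    w = rng.random(50)                              # arbitrary initial weights
    trace = stabilizing_hebb(w)
    print(f"total weight: {trace[0]:.2f} -> {trace[-1]:.2f} (target 10)")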
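
For the Alessandro et al. (2014) entry: a minimal numerical sketch of the time-varying synergy idea. The Gaussian torque profiles and the least-squares fit are illustrative stand-ins for the paper's dynamic response decomposition; the point shown is only the dimensionality reduction, from 100 control values to 3 coefficients.

    import numpy as np

    t = np.linspace(0.0, 1.0, 100)

    # A small library of fixed torque profiles ("synergies"); toy Gaussian bumps.
    W = np.stack([np.exp(-((t - c) / 0.15) ** 2) for c in (0.2, 0.5, 0.8)])

    # Desired torque trajectory for one joint (an arbitrary smooth target).
    tau_desired = np.sin(2 * np.pi * t) * np.exp(-t)

    # The controller may only output linear combinations c @ W: 100 control
    # values per joint collapse to 3 combination coefficients.
    c, *_ = np.linalg.lstsq(W.T, tau_desired, rcond=None)
    residual = np.linalg.norm(c @ W - tau_desired) / np.linalg.norm(tau_desired)
    print(f"coefficients {c.round(2)}; relative reconstruction error {residual:.1%}")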
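
For the Zubler et al. (2011) entry: a toy formalization of instruction networks built from primitive actions with conditional activation. The data structures and the tiny growth example are invented for illustration and are not the authors' instruction language.

    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class Instruction:
        """One node of an instruction network: a primitive action guarded by
        a condition on local cell state, plus the instructions it activates."""
        condition: Callable[[dict], bool]
        action: Callable[[dict], None]
        activates: List["Instruction"] = field(default_factory=list)

    def develop(cell, active, steps):
        """Each step, every active instruction whose condition holds fires its
        primitive and hands control to its successors; unmet ones stay armed."""
        for _ in range(steps):
            nxt = []
            for ins in active:
                if ins.condition(cell):
                    ins.action(cell)
                    nxt.extend(ins.activates)
                else:
                    nxt.append(ins)
            active = nxt
        return cell

    grow = Instruction(lambda c: c["depth"] < 3,
                       lambda c: c.update(depth=c["depth"] + 1))
    grow.activates = [grow]                         # self-reactivating "elongate"
    print(develop({"depth": 0}, [grow], steps=5))   # {'depth': 3}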