Abstract
Much of the interest in neuronal networks stems from the possibility of training them rather than constructing them by hand. The mechanisms governing synaptic modifications during such training are assumed to depend on signals locally available at the synapses. In contrast, the performance of a network is appropriately measured on a global scale. Here we propose a learning rule that addresses this conflict. It is inspired by recent physiological experiments and exploits the interaction of inhibitory input and backpropagating action potentials in pyramidal neurons. This mechanism makes information on the global scale available as a local signal. As a result, several desirable features are combined: the learning rule allows fast synaptic modifications approaching one-shot learning, yet it leads to stable representations during ongoing learning. Furthermore, the response properties of the neurons are not globally correlated, but cover the whole stimulus space. Copyright (C) 1999 Elsevier Science Ltd.
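The abstract only outlines the mechanism. The following is a minimal, illustrative sketch of the general idea, not the authors' published rule: a Hebbian weight update is gated by a backpropagating action potential (bAP) whose amplitude is attenuated by inhibition driven by the summed activity of the whole population, so a global quantity becomes available locally at each synapse. All names and parameter values here are assumptions made for illustration.

```python
# Sketch: bAP-gated Hebbian plasticity with global inhibitory feedback.
# Not the published rule; an assumed toy model for illustration only.
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_neurons = 50, 10
W = rng.uniform(size=(n_neurons, n_inputs))          # feedforward weights
W /= np.linalg.norm(W, axis=1, keepdims=True)        # start with unit-norm rows

eta = 0.5      # large learning rate: fast, near one-shot updates
theta = 1.0    # firing threshold
g_inh = 0.8    # strength of the global inhibitory feedback

def step(x, W):
    """One stimulus presentation with bAP-gated plasticity."""
    drive = W @ x                                     # somatic drive per neuron
    spikes = (drive > theta).astype(float)            # which neurons fire
    inhibition = g_inh * spikes.sum() / n_neurons     # population-wide signal
    # bAP amplitude in the dendrite: full when inhibition is weak, suppressed
    # when many neurons are already active -- global information made local.
    bap = spikes * max(0.0, 1.0 - inhibition)
    # Local Hebbian update gated by the bAP; row normalization keeps the
    # weights bounded so earlier representations remain stable.
    W_new = W + eta * np.outer(bap, x)
    W_new /= np.maximum(np.linalg.norm(W_new, axis=1, keepdims=True), 1e-9)
    return W_new, spikes

# Present a few random binary stimuli; updates are large per presentation,
# yet the normalized weights stay bounded across ongoing learning.
for _ in range(20):
    x = (rng.random(n_inputs) < 0.2).astype(float)
    W, spikes = step(x, W)

print("weight row norms:", np.round(np.linalg.norm(W, axis=1), 3))
```

In this toy version, the inhibitory term plays the role of the globally derived signal described in the abstract: when many neurons already respond to a stimulus, the bAP is suppressed and further potentiation is curtailed, which pushes different neurons toward covering different parts of the stimulus space.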
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1-9 |
| Number of pages | 9 |
| Journal | Neural Networks |
| Volume | 13 |
| Issue number | 1 |
| DOIs | |
| State | Published - Jan 2000 |
Funding
It is a pleasure to thank Horace Barlow, Michele Giugliano, Richard Hahnloser, Christoph Rasche, Walter Senn, Leo van Hemmen, Paul Verschure and Adrian Whatley for comments on a previous version of this manuscript. This work has been supported by the Swiss National Science Foundation, the Boehringer Ingelheim Fund and SPP Neuroinformatics.
Keywords
- Backpropagating action potential
- Inhibition
- Learning
- Oscillation
- Receptive field
- Synchronization
- Synaptic plasticity
ASJC Scopus subject areas
- Cognitive Neuroscience
- Artificial Intelligence