WorldWideScience

Sample records for chronotrons

  1. The chronotron: a neuron that learns to fire temporally precise spike patterns.

    Directory of Open Access Journals (Sweden)

    Răzvan V Florian

    Full Text Available In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons): one that provides high memory capacity (E-learning) and one that has higher biological plausibility (I-learning). With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings, with sub-millisecond precision. We show how chronotrons can learn to classify their inputs by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. Chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli, or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.
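
    The I-learning update described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes exponential synaptic current kernels, and the time constant `tau`, learning rate `eta`, and example spike times are arbitrary illustrative choices.

```python
import numpy as np

def syn_current(t, spikes, tau=5.0):
    """Exponential synaptic current contributed by one afferent at time t (ms)."""
    s = np.asarray(spikes, dtype=float)
    past = s[s <= t]
    return float(np.sum(np.exp(-(t - past) / tau)))

def i_learning_update(input_spikes, target_times, actual_times, eta=0.01):
    """I-learning sketch: each weight changes in proportion to its synaptic
    current at target output spike times (potentiation) and at actual output
    spike times (depression)."""
    dw = np.zeros(len(input_spikes))
    for i, spikes in enumerate(input_spikes):
        dw[i] = eta * (sum(syn_current(t, spikes) for t in target_times)
                       - sum(syn_current(t, spikes) for t in actual_times))
    return dw

# Two afferents; the neuron fired too late (t=8 ms instead of t=5 ms),
# so weights whose currents are strong at the target time are potentiated.
inputs = [[1.0, 4.0], [2.0]]
dw = i_learning_update(inputs, target_times=[5.0], actual_times=[8.0])
```

    Because both afferents' currents are larger at the earlier target time than at the late actual spike, both weight changes here are positive, shifting the output spike earlier.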

  2. Development of a 'chronotron' for time of flight fast neutron spectrometer; Realisation d'un chronotron pour spectrometre a neutrons rapides par temps de vol

    Energy Technology Data Exchange (ETDEWEB)

    Duclos, J [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1958-10-15

    A chronotron using the storage circuits of a 100-channel amplitude analyser has been built in order to measure the time of flight of fast neutrons. A time dilatation is obtained by means of a distribution of 20 6BN6 tubes. The full width at half maximum of the prompt-coincidence (resolution) curve is 1.6x10{sup -9} s for {beta}-{gamma} coincidences from Au{sup 198} and 2x10{sup -9} s for n-{alpha} coincidences from the (d,t) reaction. (author)

  3. Differential coincidence circuit in the 10{sup -10} second region (1960); Circuit de coincidence differentiel dans le domaine de 10{sup -10} seconde (1960)

    Energy Technology Data Exchange (ETDEWEB)

    Van Zurk, R [Commissariat a l' Energie Atomique, Lab. de Physique Nucleaire, Grenoble (France).Centre d' Etudes Nucleaires; [Grenoble-1 Univ., 38 (France); [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1960-07-01

    A coincidence circuit with a short resolving time, based on the Bay differential-coincidence principle and used as a time selector with a movable channel, has been built for measuring {gamma} lifetimes and positron annihilation lifetimes in various materials. It comprises three 6BN6 tubes arranged in a chronotron structure, used with the new Radiotechnique 56 AVP photomultipliers and plastic scintillators. For {gamma}-{gamma} coincidences from {sup 60}Co, the full width at half maximum is 2T = 4.6x10{sup -10} s at an efficiency of {epsilon} = 40 per cent and 2T = 7.2x10{sup -10} s at {epsilon} = 85 per cent. (author)

  4. Differential coincidence circuit in the 10-10 second region (1960)

    International Nuclear Information System (INIS)

    Van Zurk, R.; Grenoble-1 Univ., 38; Commissariat a l'Energie Atomique, Saclay

    1960-01-01

    A coincidence circuit with a short resolving time, based on the Bay differential-coincidence principle, is described. It uses three 6BN6 tubes arranged in a chronotron structure. With Radiotechnique 56 AVP photomultipliers, for γ-γ coincidences from 60Co the full width at half maximum is 4.6×10⁻¹⁰ s at an efficiency of ε = 40 per cent and 7.2×10⁻¹⁰ s at ε = 85 per cent. (author)

  5. Span: spike pattern association neuron for learning spatio-temporal spike patterns.

    Science.gov (United States)

    Mohemmed, Ammar; Schliebs, Stefan; Matsuda, Satoshi; Kasabov, Nikola

    2012-08-01

    Spiking Neural Networks (SNN) have been shown to be suitable tools for the processing of spatio-temporal information. However, due to their inherent complexity, the formulation of efficient supervised learning algorithms for SNN is difficult and remains an important problem in this research area. This article presents SPAN, a spiking neuron that is able to learn associations of arbitrary spike trains in a supervised fashion, allowing the processing of spatio-temporal information encoded in the precise timing of spikes. The idea of the proposed algorithm is to transform spike trains during the learning phase into analog signals so that common mathematical operations can be performed on them. Using this conversion, it is possible to apply the well-known Widrow-Hoff rule directly to the transformed spike trains in order to adjust the synaptic weights and to achieve a desired input/output spike behavior of the neuron. In the presented experimental analysis, the proposed learning algorithm is evaluated regarding its learning capabilities, its memory capacity, its robustness to noisy stimuli and its classification performance. Differences and similarities of SPAN with respect to two related algorithms, ReSuMe and Chronotron, are discussed.
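
    The spike-to-analog transformation described above can be sketched as follows: spike trains are convolved with a kernel to obtain analog traces, and the Widrow-Hoff rule is applied to those traces. This is a sketch under assumptions, not the authors' code; the alpha kernel, time constants, grid resolution, and learning rate `lr` are illustrative choices.

```python
import numpy as np

def alpha_trace(spikes, t_grid, tau=5.0):
    """Convolve a spike train with an alpha kernel to get an analog trace."""
    trace = np.zeros_like(t_grid)
    for s in spikes:
        dt = t_grid - s
        mask = dt > 0
        trace[mask] += (dt[mask] / tau) * np.exp(1 - dt[mask] / tau)
    return trace

def span_update(input_trains, desired_train, actual_train, t_grid, lr=1e-3):
    """Widrow-Hoff on kernel-filtered spike trains:
    dw_i = lr * integral of x_i(t) * (y_desired(t) - y_actual(t)) dt."""
    err = alpha_trace(desired_train, t_grid) - alpha_trace(actual_train, t_grid)
    dt = t_grid[1] - t_grid[0]
    return np.array([lr * np.sum(alpha_trace(tr, t_grid) * err) * dt
                     for tr in input_trains])

t = np.arange(0.0, 50.0, 0.1)
# Afferent 0 fires just before the desired output spike (potentiated);
# afferent 1 fires just before the erroneous actual spike (depressed).
dw = span_update([[5.0], [30.0]], desired_train=[10.0],
                 actual_train=[35.0], t_grid=t)
```

    The sign of each weight change follows the correlation between the afferent's trace and the error signal, which is exactly the Widrow-Hoff delta rule applied to the analog representations.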

  6. Precise-spike-driven synaptic plasticity: learning hetero-association of spatiotemporal spike patterns.

    Directory of Open Access Journals (Sweden)

    Qiang Yu

    Full Text Available A new learning rule, Precise-Spike-Driven (PSD) Synaptic Plasticity, is proposed for processing and memorizing spatiotemporal patterns. PSD is a supervised learning rule that is analytically derived from the traditional Widrow-Hoff rule and can be used to train neurons to associate an input spatiotemporal spike pattern with a desired spike train. Synaptic adaptation is driven by the error between the desired and the actual output spikes, with positive errors causing long-term potentiation and negative errors causing long-term depression. The amount of modification is proportional to an eligibility trace that is triggered by afferent spikes. The PSD rule is both computationally efficient and biologically plausible. The properties of this learning rule are investigated extensively through experimental simulations, including its learning performance, its generality to different neuron models, its robustness against noisy conditions, its memory capacity, and the effects of its learning parameters. Experimental results show that the PSD rule is capable of spatiotemporal pattern classification, and can even outperform a well-studied benchmark algorithm with the proposed relative confidence criterion. The PSD rule is further validated on a practical example of an optical character recognition problem. The results again show that it can achieve a good recognition performance with a proper encoding. Finally, a detailed discussion is provided about the PSD rule and several related algorithms including tempotron, SPAN, Chronotron and ReSuMe.
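
    The error-driven update with an eligibility trace described above can be sketched in discrete time as follows. This is a minimal illustration under assumed parameters (trace time constant `tau`, learning rate `eta`, step `dt`), not the published implementation: each afferent's trace decays exponentially and jumps at its spikes, and the trace gates potentiation at desired output spike times and depression at actual ones.

```python
import numpy as np

def psd_update(pre_spikes, desired, actual, T=50.0, dt=0.1,
               tau=5.0, eta=0.01):
    """PSD sketch: eligibility traces triggered by afferent spikes,
    sampled at desired spikes (LTP) and actual output spikes (LTD)."""
    n = len(pre_spikes)
    trace = np.zeros(n)
    dw = np.zeros(n)
    # Map spike times (ms) onto discrete step indices.
    pre = [set(np.round(np.asarray(s) / dt).astype(int)) for s in pre_spikes]
    d_steps = set(np.round(np.asarray(desired) / dt).astype(int))
    a_steps = set(np.round(np.asarray(actual) / dt).astype(int))
    for k in range(int(T / dt)):
        trace *= np.exp(-dt / tau)       # exponential decay
        for i in range(n):
            if k in pre[i]:
                trace[i] += 1.0          # jump at presynaptic spike
        if k in d_steps:
            dw += eta * trace            # positive error -> LTP
        if k in a_steps:
            dw -= eta * trace            # negative error -> LTD
    return dw

# Afferent 0 fires just before the desired spike at t=5 ms; afferent 1
# fires just before the erroneous actual spike at t=25 ms.
dw = psd_update([[2.0], [20.0]], desired=[5.0], actual=[25.0])
```

    Because the trace is causal and local to each synapse, the update only uses quantities available at the synapse at spike times, which is the basis of the rule's claimed biological plausibility.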

  7. Precise-spike-driven synaptic plasticity: learning hetero-association of spatiotemporal spike patterns.

    Science.gov (United States)

    Yu, Qiang; Tang, Huajin; Tan, Kay Chen; Li, Haizhou

    2013-01-01

    A new learning rule (Precise-Spike-Driven (PSD) Synaptic Plasticity) is proposed for processing and memorizing spatiotemporal patterns. PSD is a supervised learning rule that is analytically derived from the traditional Widrow-Hoff rule and can be used to train neurons to associate an input spatiotemporal spike pattern with a desired spike train. Synaptic adaptation is driven by the error between the desired and the actual output spikes, with positive errors causing long-term potentiation and negative errors causing long-term depression. The amount of modification is proportional to an eligibility trace that is triggered by afferent spikes. The PSD rule is both computationally efficient and biologically plausible. The properties of this learning rule are investigated extensively through experimental simulations, including its learning performance, its generality to different neuron models, its robustness against noisy conditions, its memory capacity, and the effects of its learning parameters. Experimental results show that the PSD rule is capable of spatiotemporal pattern classification, and can even outperform a well studied benchmark algorithm with the proposed relative confidence criterion. The PSD rule is further validated on a practical example of an optical character recognition problem. The results again show that it can achieve a good recognition performance with a proper encoding. Finally, a detailed discussion is provided about the PSD rule and several related algorithms including tempotron, SPAN, Chronotron and ReSuMe.

  8. Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding.

    Science.gov (United States)

    Gardner, Brian; Grüning, André

    2016-01-01

    Precise spike timing as a means to encode information in neural networks is biologically supported, and is advantageous over frequency-based codes in that input features are processed on a much shorter time-scale. For these reasons, much recent attention has been focused on the development of supervised learning rules for spiking neural networks that utilise a temporal coding scheme. However, despite significant progress in this area, rules that have a theoretical basis and yet can be considered biologically relevant are still lacking. Here we examine the general conditions under which synaptic plasticity most effectively takes place to support the supervised learning of a precise temporal code. As part of our analysis we examine two spike-based learning methods: one relies on an instantaneous error signal to modify synaptic weights in a network (INST rule), and the other on a filtered error signal for smoother synaptic weight modifications (FILT rule). We test the accuracy of the solutions provided by each rule with respect to their temporal encoding precision, and then measure the maximum number of input patterns they can learn to memorise using the precise timings of individual spikes as an indication of their storage capacity. Our results demonstrate the high performance of the FILT rule in most cases, underpinned by the rule's error-filtering mechanism, which is predicted to provide smooth convergence towards a desired solution during learning. We also find the FILT rule to be most efficient at performing input pattern memorisations, most notably when patterns are identified using spikes with sub-millisecond temporal precision. In comparison with existing work, we determine the performance of the FILT rule to be consistent with that of the highly efficient E-learning Chronotron rule, but with the distinct advantage that our FILT rule is also implementable as an online method for increased biological realism.
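
    The role of the filtered error signal can be sketched as follows for a FILT-style update: the desired and actual output spike trains are low-pass filtered before being correlated with each afferent's postsynaptic potential (PSP) trace (an INST-style rule would instead apply the raw, instantaneous spike errors). This is an illustrative sketch under assumptions, not the authors' implementation; the exponential filter, time constants, and learning rate are chosen for the example.

```python
import numpy as np

def exp_filter(spikes, t_grid, tau=10.0):
    """Causal exponential filtering of a spike train into a smooth signal."""
    out = np.zeros_like(t_grid)
    for s in spikes:
        mask = t_grid >= s
        out[mask] += np.exp(-(t_grid[mask] - s) / tau)
    return out

def filt_update(psp_traces, desired, actual, t_grid, eta=1e-3):
    """FILT sketch: the spike-error signal is low-pass filtered before
    being correlated with each afferent's PSP trace."""
    err = exp_filter(desired, t_grid) - exp_filter(actual, t_grid)
    dt = t_grid[1] - t_grid[0]
    return np.array([eta * np.sum(err * psp) * dt for psp in psp_traces])

t = np.arange(0.0, 60.0, 0.1)
# PSPs from two afferents: one spiking before the desired output spike,
# one spiking before the erroneous actual output spike.
psps = [exp_filter([3.0], t, tau=5.0), exp_filter([40.0], t, tau=5.0)]
dw = filt_update(psps, desired=[8.0], actual=[45.0], t_grid=t)
```

    Filtering spreads each spike error over a window, so weights are adjusted smoothly even when desired and actual spikes are slightly misaligned, which is the convergence advantage attributed to FILT over INST above.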