The paper proposes a non-iterative training algorithm for a power-efficient SNN classifier intended for self-learning systems. The approach uses preprocessing mechanisms for signals from sensory neurons analogous to those of the thalamus in the diencephalon. The algorithm is based on a cusp catastrophe model and on training by routing. It guarantees zero dispersion of connection-weight values across the entire network, which is particularly important for hardware implementations based on programmable logic devices. Thanks to non-iterative mechanisms inspired by training methods for associative memories, the approach makes it possible to estimate the capacity of the network and the required hardware resources. The trained network is resistant to catastrophic forgetting, and the low complexity of the algorithm makes in-situ hardware training possible without power-hungry accelerators. The paper compares the hardware-implementation complexity of the algorithm with that of the classic STDP and conversion methods. The primary application of the algorithm is an autonomous agent equipped with a vision system and built around a classic FPGA device.
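The abstract does not detail the routing mechanism or the thalamic preprocessing. As a loose illustration of what a non-iterative, single-weight-value scheme inspired by associative memories can look like, the Python sketch below implements a Willshaw-style binary memory in which "training" amounts to routing active input lines to a class output and every stored connection shares one weight value. The class name RoutingClassifier and its interface are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: a Willshaw-style binary associative memory used as
# an analogy for non-iterative "training by routing" with a single shared
# weight value. The paper's actual algorithm and preprocessing may differ.
import numpy as np

class RoutingClassifier:
    def __init__(self, n_inputs: int, n_classes: int):
        # All stored connections share one weight value (here, 1), so the
        # dispersion of weight values across the network is zero by design.
        self.w = np.zeros((n_classes, n_inputs), dtype=np.uint8)

    def train(self, pattern: np.ndarray, label: int) -> None:
        # Non-iterative, one-shot write: route every active input line of the
        # binary pattern to the output neuron of its class.
        self.w[label] |= pattern.astype(np.uint8)

    def classify(self, pattern: np.ndarray) -> int:
        # The output neuron receiving the most input spikes wins.
        scores = self.w @ pattern
        return int(np.argmax(scores))

# Usage: two toy binary "sensor" patterns, each stored in a single pass.
clf = RoutingClassifier(n_inputs=8, n_classes=2)
clf.train(np.array([1, 1, 0, 0, 1, 0, 0, 0]), label=0)
clf.train(np.array([0, 0, 1, 1, 0, 0, 1, 1]), label=1)
print(clf.classify(np.array([1, 1, 0, 0, 0, 0, 0, 0])))  # -> 0
```

Because writes are one-shot and weights are binary, both the memory capacity and the hardware resources (one bit per potential connection) can be estimated in advance, which is the property the abstract attributes to the associative-memory-inspired approach.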
This paper addresses the problem of efficient processing with third-generation neural networks. It presents two new models of spiking neurons based on cusp catastrophe theory. The effectiveness of the models is demonstrated on a network of three neurons solving the linearly non-separable XOR problem. The proposed solutions are intended for hardware implementation following an edge-computing strategy. The paper presents simulation results and outlines further research directions concerning practical applications and implementations in nanometer CMOS technologies with current-mode processing.
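Both abstracts refer to the cusp catastrophe. As general mathematical background only (the paper's specific neuron equations are not reproduced here), the canonical cusp model from catastrophe theory can be summarized as follows.

```latex
% Canonical cusp catastrophe: potential, equilibrium surface, bifurcation set.
% Standard catastrophe-theory background, not the paper's neuron model.
\begin{align}
  V(x; a, b) &= \tfrac{1}{4}x^{4} + \tfrac{1}{2}a x^{2} + b x, \\
  \frac{\partial V}{\partial x} &= x^{3} + a x + b = 0
     \quad \text{(equilibrium surface)}, \\
  4a^{3} + 27b^{2} &= 0 \quad \text{(bifurcation set separating mono- and bistable regimes)}.
\end{align}
```

Inside the cusp region the potential has two stable equilibria, and a small change of the control parameters $a$, $b$ can trigger an abrupt jump between them; this kind of fast, threshold-like transition is what makes the cusp model attractive for describing spiking behavior.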