Article title

Concepts of learning in assembler encoding

Authors
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
Assembler Encoding (AE) represents an Artificial Neural Network (ANN) in the form of a simple program called an Assembler Encoding Program (AEP). The task of the AEP is to create the so-called Network Definition Matrix (NDM), which contains all the information necessary to construct the ANN. Genetic algorithms are used to generate AEPs and, in consequence, ANNs. Evolution is one method of creating optimal ANNs; another is learning, during which the parameters of the ANN, e.g. the weights of inter-neuron connections, are adjusted to the task performed by the network. Combining both methods usually accelerates the generation of optimal ANNs. The paper addresses the problem of the simultaneous use of evolution and learning in AE.
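The abstract describes a pipeline in which an evolved structure (in AE proper, an AEP that fills the NDM) defines a network, and learning then tunes that network's parameters. The Python sketch below illustrates only the general evolution-plus-learning combination under heavy assumptions that go beyond the abstract: the NDM is read here as a plain feed-forward weight matrix (the paper's NDM also encodes neuron parameters), the "learning" step is a hypothetical stochastic hill climb standing in for whatever learning rule the paper uses, and the genetic algorithm is a toy truncation-selection scheme over NDMs rather than AE's operators over AEPs.

```python
import numpy as np

def ann_from_ndm(ndm):
    """Interpret an NDM as connection weights (simplifying assumption:
    ndm[i, j] is the weight of the link from neuron i to neuron j)."""
    def forward(x, n_in, n_out):
        # Feed-forward pass over neurons in index order; inputs first.
        act = np.zeros(ndm.shape[0])
        act[:n_in] = x
        for j in range(n_in, ndm.shape[0]):
            act[j] = np.tanh(act @ ndm[:, j])
        return act[-n_out:]
    return forward

def learn(ndm, samples, lr=0.05, steps=100):
    """Hypothetical learning phase: random weight perturbations, keeping
    improvements. A stand-in for the paper's actual learning rule."""
    def error(m):
        f = ann_from_ndm(m)
        return sum((f(x, 2, 1)[0] - y) ** 2 for x, y in samples)
    best, best_err = ndm, error(ndm)
    for _ in range(steps):
        cand = best + lr * np.random.randn(*best.shape)
        e = error(cand)
        if e < best_err:
            best, best_err = cand, e
    return best, best_err

# Toy GA: each individual is refined by learning before evaluation
# (a Lamarckian combination of evolution and learning), on XOR.
rng = np.random.default_rng(0)
samples = [((a, b), float(a ^ b)) for a in (0, 1) for b in (0, 1)]
pop = [rng.normal(size=(5, 5)) for _ in range(20)]
for gen in range(30):
    scored = sorted((learn(m, samples) for m in pop), key=lambda t: t[1])
    # Keep the 5 best individuals as parents; mutate to refill the population.
    pop = [scored[i % 5][0] + 0.1 * rng.normal(size=(5, 5)) for i in range(20)]
print("best error:", scored[0][1])
```

Note that the sketch evolves NDMs directly only to stay short; in AE the genetic algorithm evolves AEPs, and it is the execution of an AEP that produces the NDM from which the ANN is built.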
Year
Pages
323-337
Physical description
Bibliography: 19 items.
Contributors
author
Bibliography
  • [1] L. BAIRD: Reinforcement learning through gradient descent. PhD thesis, Carnegie Mellon University, Pittsburgh, 1999.
  • [2] P. CICHOSZ: Learning systems. WNT, Warsaw, 2000.
  • [3] C. CLAUS, C. BOUTILIER: The dynamics of reinforcement learning in cooperative multiagent systems. Proc. 15th Nat. Conf. on Artificial Intelligence, Madison, WI, (1998), 746-752.
  • [4] J. L. ELMAN: Learning and development in neural networks: The importance of starting small. Cognition, 48 (1993), 71-99.
  • [5] D. FLOREANO and J. URZELAI: Evolutionary robots with online self-organization and behavioral fitness. Neural Networks, 13 (2000), 431-443.
  • [6] R. I. W. LANG: A future for dynamic neural networks. Technical report no. CYB/1/PG/RIWL/V1.0, University of Reading, UK, 2000.
  • [7] M. L. LITTMAN and C. SZEPESVARI: A generalized reinforcement-learning model: Convergence and applications. Proc. 13th Int. Conf. on Machine Learning, (1996), 310-318.
  • [8] M. L. LITTMAN: Markov games as a framework for multi-agent reinforcement learning. Proc. 11th Int. Conf. on Machine Learning, Morgan Kaufmann, (1994), 157-163.
  • [9] M. L. LITTMAN: Value-function reinforcement learning in Markov games. J. Cognitive Systems Research, 2 (2001), 55-66.
  • [10] G. F. MILLER, P. M. TODD and S. U. HEGDE: Designing neural networks using genetic algorithms. Proc. 3rd Int. Conf. on Genetic Algorithms, (1989), 379-384.
  • [11] M. POTTER: The design and analysis of a computational model of cooperative coevolution. PhD thesis, George Mason University, Fairfax, Virginia, 1997.
  • [12] M. POTTER and K. A. DE JONG: Evolving neural networks with collaborative species. In T.I. Ören, L.G. Birta (Eds.) Proc. of the 1995 Summer Computer Simulation Conf., (1995), 340-345.
  • [13] M. A. POTTER and K.A. DE JONG: A cooperative coevolutionary approach to function optimization. The Third Parallel Problem Solving From Nature, Jerusalem, Israel, (1994), 249-257.
  • [14] M. A. POTTER and K. A. DE JONG: Cooperative coevolution: An architecture for evolving coadapted subcomponents. Evolutionary Computation, 8(1), (2000), 1-29.
  • [15] T. PRACZYK: Evolving co-adapted subcomponents in assembler encoding. Int. J. Applied Mathematics and Computer Science, 17(4), 2007.
  • [16] T. PRACZYK: Procedure application in assembler encoding. Archives of Control Sciences, 17(1), 2007, 71-91.
  • [17] T. PRACZYK: Using genetic algorithms and assembler encoding to generate neural networks. Computing and Informatics, 2008, (in press).
  • [18] J. URZELAI and D. FLOREANO: Evolution of adaptive synapses: Robots with fast adaptive behavior in new environments. Evolutionary Computation, 9(4), (2001), 495-524.
  • [19] E. YANG and D. GU: Multiagent reinforcement learning for multi-robot systems: A survey. http://citeseer.ist.psu.edu
Document type
YADDA identifier
bwmeta1.element.baztech-article-BSW3-0048-0003