Article title

Optimal training strategies for locally recurrent neural networks

Authors
Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
The problem of determining an optimal training schedule for a locally recurrent neural network is discussed. Specifically, the proper choice of the most informative measurement data, guaranteeing reliable prediction of the neural network response, is considered. Based on a scalar measure of performance defined on the Fisher information matrix related to the network parameters, the problem is formulated in terms of optimal experimental design. Its solution can then be readily achieved by adapting effective numerical algorithms based on convex optimization theory. Finally, some illustrative experiments are provided to verify the presented approach.
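The selection scheme described in the abstract can be illustrated with a minimal sketch (not the paper's own algorithm): assuming a linearized network model whose parameter sensitivities (Jacobian rows) are available, the Fisher information matrix of a data subset is the sum of outer products of its sensitivity rows, and a D-optimality criterion (log-determinant of the FIM) scores how informative the subset is. The greedy heuristic below is a simple stand-in for the convex optimal-experimental-design algorithms the paper actually employs; all names here are illustrative.

```python
import numpy as np

def greedy_d_optimal(G, k, ridge=1e-8):
    """Greedily pick k rows of the sensitivity matrix G (n x p),
    maximizing log det of the Fisher information matrix
    M(S) = sum_{i in S} g_i g_i^T at each step."""
    n, p = G.shape
    chosen = []
    M = ridge * np.eye(p)  # small ridge so the determinant is defined early on
    for _ in range(k):
        best_i, best_val = None, -np.inf
        for i in range(n):
            if i in chosen:
                continue
            g = G[i][:, None]
            # log det after tentatively adding sample i
            val = np.linalg.slogdet(M + g @ g.T)[1]
            if val > best_val:
                best_i, best_val = i, val
        chosen.append(best_i)
        g = G[best_i][:, None]
        M = M + g @ g.T
    return chosen, M

# Toy usage: 50 candidate measurements, 3 network parameters.
rng = np.random.default_rng(0)
G = rng.standard_normal((50, 3))
idx, M = greedy_d_optimal(G, 5)
```

A continuous-design formulation, as used in the paper, would instead optimize weights over candidate points via convex programming, for which exchange-type algorithms with convergence guarantees exist.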
Year
Pages
103--114
Physical description
Bibliography: 31 items, figures
Contributors
author
  • Institute of Control and Computation Engineering, University of Zielona Góra, ul. Podgórna 50, 65-246 Zielona Góra, Poland
author
  • Institute of Control and Computation Engineering, University of Zielona Góra, ul. Podgórna 50, 65-246 Zielona Góra, Poland
Bibliography
  • [1] A. C. Atkinson, A. N. Donev and R. Tobias, Optimum Experimental Designs, with SAS, Oxford University Press, Oxford, 2007.
  • [2] M. H. Choueiki and C. A. Mount-Campbell, Training data development with the D-optimality criterion, IEEE Transactions on Neural Networks, 10(1):56-63, 1999.
  • [3] V. V. Fedorov and P. Hackl, Model-Oriented Design of Experiments, Lecture Notes in Statistics, Springer-Verlag, New York, 1997.
  • [4] E. Fokoue and P. Goel, An optimal experimental design perspective on radial basis function regression, Communications in Statistics - Theory and Methods, 40(7):1184-1195, 2011.
  • [5] K. Fukumizu, Statistical active learning in multilayer perceptrons, IEEE Transactions on Neural Networks, 11:17-26, 2000.
  • [6] G. C. Goodwin and R. L. Payne, Dynamic System Identification: Experiment Design and Data Analysis, Mathematics in Science and Engineering, Academic Press, New York, 1977.
  • [7] M. M. Gupta, L. Jin and N. Homma, Static and Dynamic Neural Networks: From Fundamentals to Advanced Theory, John Wiley & Sons, New Jersey, 2003.
  • [8] M. T. Hagan and M. B. Menhaj, Training feedforward networks with the Marquardt algorithm, IEEE Transactions on Neural Networks, 5:989-993, 1994.
  • [9] S. Issanchou and J. P. Gauchi, Computer-aided optimal designs for improving neural network generalization, Neural Networks, 21:945-950, 2008.
  • [10] J. Kiefer and J. Wolfowitz, Optimum designs in regression problems, The Annals of Mathematical Statistics, 30:271-294, 1959.
  • [11] J. Korbicz and J. M. Kościelny, editors, Modeling, Diagnosis and Process Control: Implementation in the DiaSter System, Springer-Verlag, Berlin Heidelberg, 2010.
  • [12] J. Korbicz, J. M. Kościelny, Z. Kowalczuk and W. Cholewa, editors, Fault Diagnosis: Models, Artificial Intelligence, Applications, Springer-Verlag, Berlin Heidelberg, 2004.
  • [13] T. Marcu, L. Mirea and P. M. Frank, Development of dynamical neural networks with application to observer based fault detection and isolation, International Journal of Applied Mathematics and Computer Science, 9(3):547-570, 1999.
  • [14] K. Patan, Stability analysis and the stabilization of a class of discrete-time dynamic neural networks, IEEE Transactions on Neural Networks, 18:660-673, 2007.
  • [15] K. Patan, Artificial Neural Networks for the Modelling and Fault Diagnosis of Technical Processes, Lecture Notes in Control and Information Sciences, Springer-Verlag, Berlin, 2008.
  • [16] K. Patan and M. Patan, Selection of training sequences for locally recurrent neural network training, in K. Malinowski and L. Rutkowski, editors, Recent Advances in Control and Automation, pages 252-262, Academic Publishing House EXIT, Warsaw, 2008.
  • [17] K. Patan and M. Patan, Corrigendum to stability analysis and the stabilization of a class of discrete-time dynamic neural networks, IEEE Transactions on Neural Networks, 20:547-548, 2009.
  • [18] K. Patan and M. Patan, Selection of training data for locally recurrent neural network, in Proc. 20th Int. Conference on Artificial Neural Networks, ICANN 2010, Thessaloniki, Greece, 2010, published on CD-ROM.
  • [19] M. Patan, Optimal Observation Strategies for Parameter Estimation of Distributed Systems, volume 5 of Lecture Notes in Control and Computer Science, Zielona Góra University Press, Zielona Góra, Poland, 2004.
  • [20] M. Patan and K. Patan, Optimal observation strategies for model-based fault detection in distributed systems, International Journal of Control, 78(18):1497-1510, 2005.
  • [21] A. Pázman, Foundations of Optimum Experimental Design, Mathematics and Its Applications, D. Reidel Publishing Company, Dordrecht, 1986.
  • [22] A. Pázman, Nonlinear Statistical Models, Kluwer, Dordrecht, 1993.
  • [23] E. Rafajłowicz, Optimum choice of moving sensor trajectories for distributed parameter system identification, International Journal of Control, 43(5):1441-1451, 1986.
  • [24] K. Sung and P. Niyogi, Active learning the weights of a RBF network, in Proc. IEEE Workshop on Neural Networks for Signal Processing, Cambridge, MA, USA, pages 40-47, 1995.
  • [25] A. Ch. Tsoi and A. D. Back, Locally recurrent globally feedforward networks: A critical review of architectures, IEEE Transactions on Neural Networks, 5:229-239, 1994.
  • [26] D. Uciński, Optimal selection of measurement locations for parameter estimation in distributed processes, International Journal of Applied Mathematics and Computer Science, 10(2):357-379, 2000.
  • [27] D. Uciński, Optimal Measurement Methods for Distributed Parameter System Identification, CRC Press, Boca Raton, 2005.
  • [28] M. van de Wal and B. de Jager, A review of methods for input/output selection, Automatica, 37:487-510, 2001.
  • [29] E. Walter and L. Pronzato, Identification of Parametric Models from Experimental Data, Springer, London, 1997.
  • [30] M. Witczak, Toward the training of feedforward neural networks with the D-optimum input sequence, IEEE Transactions on Neural Networks, 17:357-373, 2006.
  • [31] M. B. Zarrop and G. C. Goodwin, Comments on optimal inputs for system identification, IEEE Transactions on Automatic Control, 20(2):299-300, 1975.
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-2c89fc5c-e8eb-4ed9-b430-f5d121ff0d1e