Article title

Efficient neural network architectures and advanced training algorithms

Authors
Identifiers
Title variants
PL
Efektywne architektury sieci neuronowych i zaawansowane algorytmy uczenia
Publication languages
EN
Abstracts
EN
Advantages and disadvantages of various neural network architectures are compared. It is shown that neural networks with connections across layers are significantly more powerful than the popular Multilayer Perceptron (MLP) architectures. The most powerful are Fully Connected Cascade (FCC) architectures. Unfortunately, most advanced training algorithms were developed only for the popular MLP topologies, and other, much more powerful topologies are seldom used. The newly developed second-order NBN algorithm is not only very fast and powerful, but it can also train arbitrary neural network topologies. With the NBN algorithm it is possible to train close-to-optimal architectures that could not be trained before.
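
The claim about connections across layers is easier to picture with a small sketch. Below is a minimal, illustrative forward pass for an FCC network, in which every neuron receives the original inputs plus the outputs of all earlier neurons; the function name fcc_forward, the weight layout, and the tanh activation are assumptions made for this sketch, not details taken from the article.

    import numpy as np

    def fcc_forward(x, weights):
        # Forward pass through a Fully Connected Cascade (FCC) network.
        # Each neuron sees the original inputs plus the outputs of all
        # previous neurons, i.e. the connections go across layers.
        #   x       : 1-D input vector
        #   weights : list of 1-D weight vectors; weights[k] has
        #             len(x) + k + 1 entries (inputs, earlier outputs, bias)
        signals = list(x)
        for w in weights:
            net = np.dot(w[:-1], signals) + w[-1]   # bias is the last weight
            signals.append(np.tanh(net))            # bipolar activation
        return signals[-1]                          # output of the last neuron

    # Example: a 3-neuron FCC network with 2 inputs and arbitrary weights
    rng = np.random.default_rng(0)
    weights = [rng.standard_normal(2 + k + 1) for k in range(3)]
    print(fcc_forward(np.array([1.0, -1.0]), weights))
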
PL
The paper compares the advantages and disadvantages of various neural network topologies. It is shown that neural networks with connections across layers are significantly more effective than the popular MLP topologies. The most effective are the FCC topologies. Unfortunately, most advanced training algorithms have been implemented only for the popular MLP topologies, and other, more effective topologies are rarely used. The recently developed second-order algorithm is not only very fast and effective, but it also makes it possible to train arbitrary neural network topologies. NBN can train close-to-optimal neural network architectures that could not be trained before.
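
To make the notion of a second-order update concrete, here is a minimal Levenberg-Marquardt style weight update, the family of rules to which the NBN algorithm belongs. This is only a generic sketch: the actual NBN algorithm computes the Jacobian for arbitrarily connected networks (see [12], [13]), and the function name lm_step, the argument layout, and the damping value used in the demo are assumptions, not details taken from the article.

    import numpy as np

    def lm_step(w, jacobian, errors, mu):
        # One Levenberg-Marquardt style update of the weight vector w.
        #   jacobian : matrix of d(error_p)/d(w_i), shape (P, n)
        #   errors   : error vector over the P training patterns, shape (P,)
        #   mu       : damping factor blending Gauss-Newton and gradient descent
        J, e = jacobian, errors
        H = J.T @ J + mu * np.eye(w.size)   # damped approximation of the Hessian
        g = J.T @ e                         # gradient of 0.5 * sum(e**2)
        return w - np.linalg.solve(H, g)    # Newton-like weight correction

    # Toy demonstration on random data (illustrative only)
    rng = np.random.default_rng(1)
    w = rng.standard_normal(4)
    J = rng.standard_normal((10, 4))
    e = rng.standard_normal(10)
    print(lm_step(w, J, e, mu=0.01))
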
Creators
  • Auburn University, USA; University of Information Technology and Management, Rzeszow
Bibliography
  • [1] Rumelhart D. E., Hinton G. E., Williams R. J.: Learning representations by back-propagating errors. Nature, vol. 323, pp. 533–536, 1986.
  • [2] Werbos P. J.: Back-propagation: Past and Future. Proceedings of the International Conference on Neural Networks, San Diego, CA, vol. 1, pp. 343–354, 1988.
  • [3] Ferrari S., Jensenius M.: A Constrained Optimization Approach to Preserving Prior Knowledge During Incremental Training, IEEE Trans. on Neural Networks, vol. 19, no. 6, pp. 996-1009, June 2008.
  • [4] Qing Song, J.C. Spall, Yeng Chai Soh, Jie Ni: Robust Neural Network Tracking Controller Using Simultaneous Perturbation Stochastic Approximation, IEEE Trans. on Neural Networks, vol. 19, no. 5, pp. 817–835, May 2008.
  • [5] Yinyin Liu, Starzyk J. A., Zhen Zhu: Optimized Approximation Algorithm in Neural Networks Without Overfitting, IEEE Trans. on Neural Networks, vol. 19, no. 6, pp. 983–995, June 2008.
  • [6] Phansalkar V. V., Sastry P. S.: Analysis of the back-propagation algorithm with momentum, IEEE Trans. on Neural Networks, vol. 5, no. 3, pp. 505–506, March 1994.
  • [7] Riedmiller M., Braun H.: A direct adaptive method for faster backpropagation learning: The RPROP algorithm. Proc. International Conference on Neural Networks, San Francisco, CA, 1993, pp. 586–591.
  • [8] Cheol-Taek Kim, Ju-Jang Lee: Training Two-Layered Feedforward Networks With Variable Projection Method, IEEE Trans. on Neural Networks, vol. 19, no. 2, pp. 371–375, Feb 2008.
  • [9] Ampazis N., Perantonis S. J.: Two highly efficient second-order algorithms for training feedforward networks, IEEE Trans. on Neural Networks, vol. 13, no. 5, pp. 1064–1074, May 2002.
  • [10] Wu, J.-M.: Multilayer Potts Perceptrons with Levenberg–Marquardt Learning. IEEE Trans. on Neural Networks, vol. 19, no. 12, pp. 2032–2043, Feb 2008.
  • [11] Toledo A., Pinzolas M., Ibarrola J. J., Lera G.: Improvement of the neighborhood based Levenberg-Marquardt algorithm by local adaptation of the learning coefficient, IEEE Trans. on Neural Networks, vol. 16, no. 4, pp. 988–992, April 2005.
  • [12] Wilamowski B.M., Cotton N. J., Kaynak O., Dundar G.: Computing Gradient Vector and Jacobian Matrix in Arbitrarily Connected Neural Networks, IEEE Trans. on Industrial Electronics, vol. 55, no. 10, pp. 3784–3790, Oct. 2008.
  • [13] Wilamowski B. M., Yu H.: Improved Computation in Levenberg Marquardt Training, IEEE Trans. on Neural Networks (available as preprint).
  • [14] Hagan M. T., Menhaj M. B.: Training feedforward networks with the Marquardt algorithm. IEEE Trans. on Neural Networks, vol. 5, no. 6, pp. 989–993, Nov. 1994.
  • [15] Sheng Wan, Banta L. E.: Parameter Incremental Learning Algorithm for Neural Networks, IEEE Trans. on Neural Networks, vol. 17, no. 6, pp. 1424–1438, June 2006.
  • [16] Jian-Xun Peng, Kang Li, Irwin G. W.: A New Jacobian Matrix for Optimal Learning of Single-Layer Neural Networks, IEEE Trans. on Neural Networks, vol. 19, no. 1, pp. 119–129, Jan 2008.
  • [17] Wilamowski B. M., Hunter D., Malinowski A.: Solving Parity-n Problems with Feedforward Neural Network, Proc. of the IJCNN'03 International Joint Conference on Neural Networks, pp. 2546–2551, Portland, Oregon, July 20–23, 2003.
  • [18] Wilamowski B. M.: Neural Network Architectures and Learning Algorithms: How Not to Be Frustrated with Neural Networks, IEEE Industrial Electronics Magazine, vol. 3, no. 4, pp. 56–63, Dec. 2009.
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-article-BPG8-0033-0054