Article title

Convergence Analysis of An Improved Extreme Learning Machine Based on Gradient Descent Method

Publication languages
EN
Abstracts
EN
The extreme learning machine (ELM) is an efficient algorithm, but it requires more hidden nodes than back-propagation (BP) algorithms to reach comparable performance. Recently, an efficient learning algorithm, the upper-layer-solution-unaware algorithm (USUA), was proposed for the single-hidden-layer feed-forward neural network; it requires fewer hidden nodes and less testing time than ELM. In this paper, we give a theoretical analysis of USUA. The results show that the error function decreases monotonically during training, that the gradient of the error function with respect to the weights tends to zero (weak convergence), and that the weight sequence converges to a fixed point (strong convergence) as the number of iterations approaches infinity. A simulation on the MNIST database of handwritten digits verifies the theoretical results.
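The abstract contrasts ELM's randomly fixed hidden layer with a gradient-based refinement of the hidden weights. The sketch below illustrates both ideas on a toy regression task (the data, variable names, and learning rate are our assumptions, not the paper's; the paper's experiment uses MNIST): an ELM step solves the output weights by least squares, and then gradient descent updates the hidden-layer weights with the output weights held fixed, in the spirit of the upper-layer-solution-unaware scheme. It is a simplified illustration, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (hypothetical stand-in for MNIST).
X = rng.normal(size=(200, 5))                 # 200 samples, 5 features
y = np.sin(X.sum(axis=1, keepdims=True))      # smooth nonlinear target

n_hidden = 50
W = rng.normal(size=(5, n_hidden))            # random input-to-hidden weights
b = rng.normal(size=(1, n_hidden))            # random hidden biases

def hidden(X, W, b):
    return np.tanh(X @ W + b)

# ELM step: hidden weights stay random; only the output weights are
# fitted, as the least-squares solution beta = H^+ y.
H = hidden(X, W, b)
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

def loss(W, b, beta):
    e = hidden(X, W, b) @ beta - y
    return 0.5 * np.mean(e ** 2)

# Gradient descent on the hidden-layer weights with beta held fixed.
# With a small enough step size the loss should not increase, matching
# the monotone-decrease property the paper proves for USUA.
lr = 1e-3
losses = [loss(W, b, beta)]
for _ in range(50):
    H = hidden(X, W, b)
    e = H @ beta - y                                  # residual, (200, 1)
    dH = (e @ beta.T) * (1.0 - H ** 2) / len(X)       # chain rule through tanh
    W -= lr * (X.T @ dH)
    b -= lr * dH.sum(axis=0, keepdims=True)
    losses.append(loss(W, b, beta))
```

Because the least-squares step already minimizes the error over the output weights, the remaining gradient lives entirely in the hidden layer, which is what the descent loop exploits.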
Pages
5--15
Physical description
Bibliography: 7 items, figures
Authors
author
  • School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China
  • College of Science, China University of Petroleum (Huadong), Qingdao 266580, China
author
  • School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China
author
  • College of Science, China University of Petroleum (Huadong), Qingdao 266580, China
author
  • College of Information and Control Engineering, China University of Petroleum (Huadong), Qingdao 266580, China
author
  • College of Science, China University of Petroleum (Huadong), Qingdao 266580, China
Bibliography
  • 1. Werbos, P. J., 1974, Beyond regression: new tools for prediction and analysis in the behavioral sciences, Ph.D. thesis, Harvard University, Cambridge, MA
  • 2. Rumelhart, D. E., Hinton, G. E., Williams, R. J., 1986, Learning representations by back-propagating errors, Nature, Vol. 323, pp. 533-536
  • 3. Goodband, J. H., Haas, O. C. L., Mills, J. A., 2008, A comparison of neural network approaches for on-line prediction in IGRT, Medical Physics, Vol. 35, No. 3, pp. 1113-1122
  • 4. Huang, G.-B., Zhu, Q.-Y., Siew, C.-K., 2006, Extreme learning machine: theory and applications, Neurocomputing, Vol. 70, No. 1-3, pp. 489-501
  • 5. Yu, D., Deng, L., 2012, Efficient and effective algorithms for training single-hidden-layer neural networks, Pattern Recognition Letters, Vol. 33, No. 5, pp. 554-558
  • 6. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P., 1998, Gradient-based learning applied to document recognition, Proceedings of the IEEE, Vol. 86, No. 11, pp. 2278-2324
  • 7. Yuan, Y., Sun, W., 2001, Optimization Theory and Methods, Science Press, Beijing
YADDA identifier
bwmeta1.element.baztech-5fad60da-73d2-42c4-a588-3e7107f716ff