Article title

Towards a very fast feedforward multilayer neural networks training algorithm

Publication languages
EN
Abstracts
EN
This paper presents a novel fast algorithm for training feedforward neural networks. It is based on the Recursive Least Squares (RLS) method commonly used for designing adaptive filters. In addition, it employs two linear-algebra techniques, namely an orthogonal transformation method known as Givens Rotations (GR) and the QR decomposition, combined into the GQR procedure (symbolically, GR + QR = GQR) for solving the normal equations in the weight-update process. In this paper, a novel approach to the GQR algorithm is presented. The main idea is to reduce the computational cost of a single rotation by eliminating the square-root calculation and reducing the number of multiplications. The proposed modification is based on the scaled version of the Givens rotations, denoted SGQR, and is expected to bring a significant reduction in training time compared with the classic GQR algorithm. The paper begins with an introduction and a description of the classic Givens rotation. Then, the scaled rotation and its use in the QR decomposition are discussed. The main section of the article presents the neural network training algorithm that applies scaled Givens rotations and the QR decomposition in the weight-update process. Finally, experimental results for the proposed algorithm, obtained on several benchmarks combined with neural networks of various topologies, are presented and discussed. It is shown that the proposed algorithm outperforms several other commonly used methods, including the well-known Adam optimizer.
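To illustrate the building block the abstract refers to, the following minimal Python sketch shows how classic Givens rotations triangularize a least-squares system and solve it by back substitution. This is only an illustration of the standard GR + QR idea, not the authors' SGQR implementation; the function name and shapes are chosen for the example. Note the `np.hypot` call: that square root per rotation is precisely what the scaled (square-root-free) variant eliminates.

```python
import numpy as np

def givens_qr_solve(A, b):
    """Solve min ||Ax - b|| by zeroing A's subdiagonal entries with
    classic Givens rotations, then back-substituting on R x = Q^T b."""
    R = A.astype(float).copy()
    y = b.astype(float).copy()
    m, n = R.shape
    for j in range(n):
        for i in range(j + 1, m):
            if R[i, j] != 0.0:
                # classic rotation: one square root per annihilated entry
                r = np.hypot(R[j, j], R[i, j])
                c, s = R[j, j] / r, R[i, j] / r
                G = np.array([[c, s], [-s, c]])
                # apply the 2x2 rotation to rows j and i of R and to y
                R[[j, i], j:] = G @ R[[j, i], j:]
                y[[j, i]] = G @ y[[j, i]]
    # R[:n, :n] is now upper triangular; solve the triangular system
    return np.linalg.solve(R[:n, :n], y[:n])
```

Because each rotation is orthogonal, the triangularized system has the same least-squares solution as the original one, which is why the method can replace explicit formation of the normal equations.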
Year
Pages
181--195
Physical description
Bibliography: 38 items, figures
Authors
  • Department of Computer Engineering, Częstochowa University of Technology, al. Armii Krajowej 36, 42-200 Częstochowa, Poland
  • Department of Computer Engineering, Częstochowa University of Technology, al. Armii Krajowej 36, 42-200 Częstochowa, Poland
  • Institute of Computer Science, AGH University of Science and Technology, 30-059 Kraków, Poland
  • Information Technology Institute, University of Social Sciences, 90-113, Łódź, Poland
  • Department of Computer and Electrical Engineering, University of Louisville, KY 40292, USA
Bibliography
  • [1] O. Abedinia, N. Amjady, and N. Ghadimi. Solar Energy Forecasting Based on Hybrid Neural Network and Improved Metaheuristic Algorithm. Computational Intelligence, 34(1): 241–260, 2018.
  • [2] U.R. Acharya, S.L. Oh, Y. Hagiwara, J.H. Tan, and H. Adeli. Deep Convolutional Neural Network for the Automated Detection and Diagnosis of Seizure Using EEG Signals. Computers in Biology and Medicine, 100: 270–278, 2018.
  • [3] I. Aizenberg, D.V. Paliy, J.M. Zurada, and J.T. Astola. Blur Identification by Multilayer Neural Network Based on Multivalued Neurons. IEEE Transactions on Neural Networks, 19(5): 883–898, 2008.
  • [4] E. Angelini, G. di Tollo, and A. Roli. A Neural Network Approach for Credit Risk Evaluation. The Quarterly Review of Economics and Finance, 48(4): 733–755, 2008.
  • [5] J. Bilski. Parallel Structures for Feedforward and Dynamic Neural Networks. (In Polish) Akademicka Oficyna Wydawnicza EXIT, 2013.
  • [6] J. Bilski and A.I. Galushkin. A New Proposition of the Activation Function for Significant Improvement of Neural Networks Performance. In Artificial Intelligence and Soft Computing, volume 9602 of Lecture Notes in Computer Science, pages 35–45. Springer-Verlag Berlin Heidelberg, 2016.
  • [7] J. Bilski, B. Kowalczyk, and J.M. Żurada. Application of the Givens Rotations in the Neural Network Learning Algorithm. In Artificial Intelligence and Soft Computing, volume 9602 of Lecture Notes in Artificial Intelligence, pages 46–56. Springer-Verlag Berlin Heidelberg, 2016.
  • [8] J. Bilski and J. Smoląg. Parallel Realisation of the Recurrent Multi Layer Perceptron Learning. Artificial Intelligence and Soft Computing, Springer-Verlag Berlin Heidelberg, (LNAI 7267): 12–20, 2012.
  • [9] J. Bilski and J. Smoląg. Parallel Approach to Learning of the Recurrent Jordan Neural Network. Artificial Intelligence and Soft Computing, Springer-Verlag Berlin Heidelberg, (LNAI 7895): 32–40, 2013.
  • [10] J. Bilski and J. Smoląg. Parallel Architectures for Learning the RTRN and Elman Dynamic Neural Network. IEEE Transactions on Parallel and Distributed Systems, 26(9): 2561–2570, 2015.
  • [11] J. Bilski, J. Smoląg, and A.I. Galushkin. The Parallel Approach to the Conjugate Gradient Learning Algorithm for the Feedforward Neural Networks. In Artificial Intelligence and Soft Computing, volume 8467 of Lecture Notes in Computer Science, pages 12–21. Springer-Verlag Berlin Heidelberg, 2014.
  • [12] J. Bilski, J. Smoląg, and J.M. Żurada. Parallel Approach to the Levenberg-Marquardt Learning Algorithm for Feedforward Neural Networks. In Artificial Intelligence and Soft Computing, volume 9119 of Lecture Notes in Computer Science, pages 3–14. Springer-Verlag Berlin Heidelberg, 2015.
  • [13] J. Bilski, B. Kowalczyk, A. Marchlewska, and J.M. Zurada. Local Levenberg-Marquardt Algorithm for Learning Feedforward Neural Networks. Journal of Artificial Intelligence and Soft Computing Research, 10(4): 299–316, 2020.
  • [14] J. Bilski, B. Kowalczyk, A. Marjański, M. Gandor, and J. Zurada. A Novel Fast Feedforward Neural Networks Training Algorithm. Journal of Artificial Intelligence and Soft Computing Research, 11(4): 287–306, 2021.
  • [15] A. Cotter, O. Shamir, N. Srebro, and K. Sridharan. Better Mini-batch Algorithms via Accelerated Gradient Methods. CoRR, abs/1106.4574, 2011.
  • [16] W. Duch, K. Swaminathan, and J. Meller. Artificial Intelligence Approaches for Rational Drug Design and Discovery. Current Pharmaceutical Design, 13(14): 1497–1508, 2007.
  • [17] J. Duchi, E. Hazan, and Y. Singer. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. Journal of Machine Learning Research, 12: 2121–2159, 2011.
  • [18] W.M. Gentleman. Least Squares Computations by Givens Transformations without Square Roots. IMA Journal of Applied Mathematics, 12(3): 329–336, 12 1973.
  • [19] S. Ghosh and D.L. Reilly. Credit Card Fraud Detection with a Neural Network. In Proceedings of the Twenty-Seventh Hawaii International Conference on System Sciences, volume 3, pages 621–630, Jan 1994.
  • [20] W. Givens. Computation of Plane Unitary Rotations Transforming a General Matrix to Triangular Form. Journal of the Society for Industrial and Applied Mathematics, 6: 26–50, 1958.
  • [21] J. Gu, Z. Wang, J. Kuen, L. Ma, A. Shahroudy, B. Shuai, T. Liu, X. Wang, G. Wang, J. Cai, and T. Chen. Recent Advances in Convolutional Neural Networks. Pattern Recognition, 77: 354–377, 2018.
  • [22] M.T. Hagan and M.B. Menhaj. Training Feedforward Networks with the Marquardt Algorithm. IEEE Transactions on Neural Networks, 5: 989–993, 1994.
  • [23] A. Horzyk and R. Tadeusiewicz. Self-optimizing Neural Networks. In Fu-Liang Yin, Jun Wang, and Chengan Guo, editors, Advances in Neural Networks – ISNN 2004, pages 150–155, Berlin, Heidelberg, 2004. Springer Berlin Heidelberg.
  • [24] A. Kiełbasiński and H. Schwetlick. Numeryczna Algebra Liniowa: Wprowadzenie do Obliczeń Zautomatyzowanych. Wydawnictwa Naukowo-Techniczne, Warszawa, 1992.
  • [25] D.P. Kingma and J. Ba. Adam: A Method for Stochastic Optimization, 2014.
  • [26] Y. Li, R. Cui, Z. Li, and D. Xu. Neural Network Approximation Based Near-optimal Motion Planning with Kinodynamic Constraints Using RRT. IEEE Transactions on Industrial Electronics, 65(11): 8718–8729, Nov 2018.
  • [27] H. Liu, X. Mi, and Y. Li. Wind Speed Forecasting Method Based on Deep Learning Strategy Using Empirical Wavelet Transform, Long Short Term Memory Neural Network and Elman Neural Network. Energy Conversion and Management, 156: 498–514, 2018.
  • [28] M. Mazurowski, P. Habas, J. Zurada, J. Lo, J. Baker, and G. Tourassi. Training Neural Network Classifiers for Medical Decision Making: The Effects of Imbalanced Datasets on Classification Performance. Neural networks: the official journal of the International Neural Network Society, 21: 427–36, 03 2008.
  • [29] Yu.E. Nesterov. A Method for Solving the Convex Programming Problem with Convergence Rate O(1/k²). Soviet Mathematics Doklady, 27: 372–376, 1983.
  • [30] B.T. Polyak. Some Methods of Speeding Up the Convergence of Iteration Methods. USSR Computational Mathematics and Mathematical Physics, 4(5): 1–17, 1964.
  • [31] R. Shirin. A Neural Network Approach for Retailer Risk Assessment in the Aftermarket Industry. Benchmarking: An International Journal, 26(5): 1631–1647, Jan 2019.
  • [32] A.K. Singh, S.K. Jha, and A.V. Muley. Candidates Selection Using Artificial Neural Network Technique in a Pharmaceutical Industry. In Siddhartha Bhattacharyya, Aboul Ella Hassanien, Deepak Gupta, Ashish Khanna, and Indrajit Pan, editors, International Conference on Innovative Computing and Communications, pages 359–366, Singapore, 2019. Springer Singapore.
  • [33] I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the Importance of Initialization and Momentum in Deep Learning. In Proceedings of the 30th International Conference on Machine Learning – Volume 28, ICML'13, pages III–1139–III–1147. JMLR.org, 2013.
  • [34] R. Tadeusiewicz, L. Ogiela, and M.R. Ogiela. Cognitive Analysis Techniques in Business Planning and Decision Support Systems. In L. Rutkowski, R. Tadeusiewicz, L.A. Zadeh, and J.M. Żurada, editors, Artificial Intelligence and Soft Computing – ICAISC 2006, pages 1027–1039, Berlin, Heidelberg, 2006. Springer Berlin Heidelberg.
  • [35] K.Y. Tam and M. Kiang. Predicting Bank Failures: A Neural Network Approach. Applied Artificial Intelligence, 4(4): 265–282, 1990.
  • [36] P.J. Werbos. Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences. PhD thesis, Harvard University, 1974.
  • [37] B.M. Wilamowski. Neural Network Architectures and Learning Algorithms. IEEE Industrial Electronics Magazine, 3(4): 56–63, 2009.
  • [38] M.D. Zeiler. Adadelta: An Adaptive Learning Rate Method, 2012.
Document type
YADDA identifier
bwmeta1.element.baztech-0a3b40eb-5968-4b6e-b714-7cb622e020f8