Identifiers
Title variants
Publication languages
Abstracts
The iterative inversion of neural networks has been applied to problems of adaptive control because of its strong information-processing performance. In this paper, an iterative inversion neural network with an L₂ penalty term is presented and trained with the classical gradient descent method. We focus on the theoretical analysis of the proposed algorithm: monotonicity of the error function, boundedness of the input sequence, and weak (strong) convergence behavior. For the boundedness property, we rigorously prove that the feasible input solutions are restricted to a measurable region. Weak convergence means that the gradient of the error function with respect to the input tends to zero as the number of iterations goes to infinity, while strong convergence means that the iterative sequence of input vectors converges to a fixed optimal point.
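As a rough illustration of the procedure the abstract describes (not the authors' exact formulation), network inversion with an L₂ penalty can be phrased as minimizing E(x) = ½‖f(x) − y*‖² + λ‖x‖² over the input x of an already-trained network f, with gradient descent updates x_{k+1} = x_k − η∇ₓE(x_k). The NumPy sketch below implements this for an assumed single-hidden-layer sigmoid network with frozen random weights; the architecture, target output, learning rate η, and penalty coefficient λ are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch of iterative network inversion with an L2 penalty.
# All sizes, weights, targets, and hyperparameters below are assumed
# for illustration; they are not values from the paper.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fixed ("already trained") one-hidden-layer network: f(x) = V.T @ sigmoid(W @ x).
# The weights stay frozen; only the input x is optimized.
W = rng.normal(size=(8, 4))   # hidden-layer weights
V = rng.normal(size=(8, 2))   # output-layer weights

def forward(x):
    h = sigmoid(W @ x)
    return V.T @ h, h

def error_and_grad(x, y_target, lam):
    """E(x) = 0.5*||f(x) - y*||^2 + lam*||x||^2 and its gradient w.r.t. x."""
    y, h = forward(x)
    r = y - y_target
    # Backpropagate through the fixed weights to the *input*, not the weights.
    grad = W.T @ ((V @ r) * h * (1.0 - h)) + 2.0 * lam * x
    return 0.5 * r @ r + lam * x @ x, grad

y_target = np.array([0.3, 0.7])  # desired network output (assumed)
x = rng.normal(size=4)           # initial input guess
eta, lam = 0.05, 1e-3            # step size and L2 penalty coefficient (assumed)

for k in range(5000):
    E, g = error_and_grad(x, y_target, lam)
    x -= eta * g                  # gradient descent step on the input
    if np.linalg.norm(g) < 1e-8:  # weak convergence: gradient -> 0
        break

print(f"iterations={k}, E={E:.6f}, ||grad||={np.linalg.norm(g):.2e}")
```

The stopping test mirrors the weak-convergence notion in the abstract (the gradient with respect to the input vanishing), and the λ‖x‖² term penalizes large inputs, which is in the spirit of the boundedness result the paper proves for the input sequence.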
Publisher
Journal
Year
Volume
Pages
85-98
Physical description
Bibliography: 21 items
Authors
author
- College of Science, China University of Petroleum, Qingdao 266580, China
author
- College of Science, China University of Petroleum, Qingdao 266580, China
author
- College of Science, China University of Petroleum, Qingdao 266580, China
author
- Department of Electrical and Computer Engineering, University of Louisville, Louisville, KY 40292, USA
- Information Technology Institute, University of Social Sciences, Łódź 90-113, Poland
Bibliography
- 1. J. M. Zurada, Introduction to Artificial Neural Systems, West Publishing Company, St. Paul, 1992.
- 2. S. S. Haykin, Neural Networks and Learning Machines, Pearson Education, Upper Saddle River, 2009.
- 3. P. Werbos, Beyond regression: New tools for prediction and analysis in the behavioral sciences, Ph.D. dissertation, Harvard University, 1974.
- 4. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, Learning representations by back-propagating errors, Cognitive Modeling, 1988.
- 5. G. E. Hinton, Connectionist learning procedures, Artificial Intelligence, vol. 40, no. 1, pp. 185-234, 1989.
- 6. R. Reed, Pruning algorithms - a survey, IEEE Transactions on Neural Networks, vol. 4, no. 5, pp. 740-747, 1993.
- 7. M. Ishikawa, Structural learning with forgetting, Neural Networks, vol. 9, no. 3, pp. 509-521, 1996.
- 8. R. Setiono, A penalty-function approach for pruning feedforward neural networks, Neural Computation, vol. 9, no. 1, pp. 185-204, 1997.
- 9. H. M. Shao, W. Wei, and L.-J. Liu, Convergence of online gradient method with penalty for BP neural networks, Communications in Mathematical Research, vol. 26, no. 1, pp. 67-75, 2010.
- 10. J. Wang, J. Yang, and W. Wu, Convergence of cyclic and almost-cyclic learning with momentum for feedforward neural networks, IEEE Transactions on Neural Networks, vol. 22, no. 8, pp. 1297-1306, 2011.
- 11. G. Uhlmann, Inside Out: Inverse Problems and Applications, Cambridge University Press, 2003.
- 12. M. Zamparo, S. Stramaglia, J. Banavar, and A. Maritan, Inverse problem for multivariate time series using dynamical latent variables, Physica A: Statistical Mechanics and its Applications, vol. 391, no. 11, pp. 3159-3169, 2012.
- 13. J. Kindermann and A. Linden, Inversion of neural networks by gradient descent, Parallel Computing, vol. 14, no. 3, pp. 277-286, 1990.
- 14. A. Fanni and A. Montisci, A neural inverse problem approach for optimal design, IEEE Transactions on Magnetics, vol. 39, no. 3, pp. 1305-1308, 2003.
- 15. Y. Hayakawa and K. Nakajima, Design of the inverse function delayed neural network for solving combinatorial optimization problems, IEEE Transactions on Neural Networks, vol. 21, no. 2, pp. 224-237, 2010.
- 16. D. Cherubini, A. Fanni, A. Montisci, and P. Testoni, Inversion of MLP neural networks for direct solution of inverse problems, IEEE Transactions on Magnetics, vol. 41, no. 5, pp. 1784-1787, 2005.
- 17. E. W. Saad and D. C. Wunsch II, Neural network explanation using inversion, Neural Networks, vol. 20, no. 1, pp. 78-93, 2007.
- 18. R. W. Duren, R. J. Marks, P. D. Reynolds, and M. L. Trumbo, Real-time neural network inversion on the SRC-6e reconfigurable computer, IEEE Transactions on Neural Networks, vol. 18, no. 3, pp. 889-901, 2007.
- 19. S.-Q. Meng, Convergence of an inverse iteration algorithm for neural networks, Dalian University of Technology, 2007.
- 20. Z.-B. Xu, H. Zhang, Y. Wang, X.-Y. Chang, and Y. Liang, L1/2 regularization, Science China Information Sciences, vol. 53, no. 6, pp. 1159-1169, 2010.
- 21. W. Wu, Q. Fan, J. M. Zurada, J. Wang, D. Yang, and Y. Liu, Batch gradient method with smoothing L1/2 regularization for training of feedforward neural networks, Neural Networks, vol. 50, pp. 72-78, 2014.
Document type
YADDA identifier
bwmeta1.element.baztech-c3abeb65-bf19-40f6-b55f-cb6fd8d966d4