Article title

Enhancing constructive neural network performance using functionally expanded input data

Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
Constructive learning algorithms are an efficient way to train feedforward neural networks. Some of their features, such as the automatic definition of the neural network (NN) architecture and fast training, give them a high adaptive capacity and allow the usual pre-training phase, known as model selection, to be skipped. However, these advantages usually come at the price of lower accuracy rates compared with those obtained by conventional NN learning approaches, which is perhaps why conventional NN training algorithms are preferred over constructive NN (CoNN) algorithms. Aiming at enhancing CoNN accuracy and, as a result, making them a competitive choice for machine learning applications, this paper proposes the use of functionally expanded input data. The investigation considered six two-class CoNN algorithms, ten data domains and seven polynomial expansions. Experimental results, followed by a comparative analysis, show that performance rates can improve when CoNN algorithms learn from functionally expanded input data.
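The record does not reproduce the seven polynomial expansions evaluated in the paper, but the general idea is in the spirit of Pao's functional-link nets [24, 25]: each input pattern is mapped to a higher-dimensional space by appending nonlinear functions of its original attributes before the network is trained. The Python sketch below is a rough illustration only; the particular basis used here (per-attribute powers and pairwise products) is an assumption, not the paper's actual expansions.

```python
import numpy as np

def expand_inputs(X, degree=2):
    """Polynomial functional expansion of an input matrix.

    A minimal sketch of the functional-link idea: every pattern is
    augmented with extra attributes computed from the original ones
    (here: per-attribute powers and pairwise products). This basis is
    illustrative; the seven expansions evaluated in the paper are not
    given in this record.
    """
    X = np.asarray(X, dtype=float)
    blocks = [X]                               # original attributes
    for d in range(2, degree + 1):
        blocks.append(X ** d)                  # powers x_i^d
    n = X.shape[1]
    products = [X[:, i] * X[:, j]              # pairwise products x_i * x_j
                for i in range(n) for j in range(i + 1, n)]
    if products:
        blocks.append(np.column_stack(products))
    return np.hstack(blocks)

# A 2-attribute pattern expands to 2 + 2 + 1 = 5 attributes, which a
# CoNN algorithm would then consume in place of the raw inputs.
X = np.array([[0.1, 0.2], [0.3, 0.4]])
print(expand_inputs(X).shape)  # (2, 5)
```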
Year
Pages
119-131
Physical description
Bibliography: 32 items, figures.
Authors
  • Department of Computer Science, UFSCar, Rodovia Washington Luís, km 235, São Carlos-SP, Brazil
  • Department of Computer Science, UFSCar & FACCAMP, São Carlos & Campo Limpo Paulista - SP, Brazil
Bibliography
  • [1] E. Amaldi and B. Guenin, Two constructive methods for designing compact feedforward networks of threshold units, International Journal of Neural Systems, vol. 8, no. 5-6, 1997, pp. 629-645.
  • [2] J. K. Anlauf and M. Biehl, The AdaTron: an adaptive perceptron algorithm, Europhysics Letters, vol. 10, 1989, pp. 687-692.
  • [3] J. R. Bertini Jr. and M. C. Nicoletti, Refining constructive neural networks using functionally expanded input data, Proc. Int. Joint Conference on Neural Networks, 2015, pp. 1-8.
  • [4] J. R. Bertini Jr. and M. C. Nicoletti, A constructive neural network algorithm based on the geometric concept of barycenter of convex hull, In: Computational Intelligence: Methods and Applications, IEEE Comp. Intelligence Society, Poland, 2008, pp. 1-12.
  • [5] N. Burgess, A constructive algorithm that converges for real-valued input patterns, International Journal of Neural Systems, vol. 5, no. 1, 1994, pp. 59-66.
  • [6] S. Fahlman and C. Lebiere, The cascade correlation architecture, in Advances in Neural Information Processing Systems, vol. 2, 1990, pp. 524-532.
  • [7] S. E. Fahlman, Faster-learning variations on backpropagation: an empirical study, In: Proc. of the 1988 Connectionist Models Summer School, D. S. Touretzky, G. E. Hinton and T. J. Sejnowski (Eds.), Morgan Kaufmann, San Mateo, CA, 1988, pp. 38-51.
  • [9] L. Franco, D. A. Elizondo and J. M. Jerez, Constructive Neural Networks, Studies in Comp. Intelligence Series, v. 258, Springer, 2010.
  • [10] M. Frean, A thermal perceptron learning rule, Neural Computation, vol. 4, 1992, pp. 946-957.
  • [11] M. Frean, The upstart algorithm: a method for constructing and training feedforward neural networks, Neural Computation, vol. 2, 1990, pp. 198-209.
  • [12] S. I. Gallant, Perceptron-based learning algorithms, IEEE Transactions on Neural Networks, vol. 1, no. 2, 1990, pp. 179-191.
  • [13] S. I. Gallant, Neural Network Learning and Expert Systems, The MIT Press, London, England, 1994.
  • [14] X. Glorot and Y. Bengio, Understanding the difficulty of training deep feedforward neural networks, In: Proc. AISTATS, 2010, pp. 249-256.
  • [15] T. Hrycej, Modular Learning in Neural Networks, A. Wiley, N. York, 1992.
  • [16] Y-C. Hu, Functional-link nets with genetic-algorithm-based learning for robust nonlinear interval regression analysis, Neurocomputing, vol. 72, 2009, pp. 1808-1816.
  • [17] W. Krauth and M. Mézard, Learning algorithms with optimal stability in neural networks, Journal of Physics A, vol. 20, 1987, pp. 745-752.
  • [18] M. Lichman, UCI Machine Learning Repository. [Online]. Available: http://archive.ics.uci.edu/ml. Irvine, CA: University of California, School of Information and Computer Science, 2013.
  • [19] D. Martinez and D. Estève, The offset algorithm: building and learning method for multilayer neural networks, Europhysics Letters, vol. 18, no. 2, 1992, pp. 95-100.
  • [20] M. Mézard and J.-P. Nadal, Learning in feedforward networks: the tiling algorithm, Journal of Physics A: Mathematical and General, vol. 22, 1989, pp. 2191-2203.
  • [21] M. Muselli, Sequential constructive techniques, Neural Network Systems Techniques and Applications, C. Leondes (Ed.), San Diego, CA: Academic, vol. 2, 1998, pp. 81-144.
  • [22] M. C. Nicoletti and J. R. Bertini Jr., An empirical evaluation of constructive neural network algorithms in classification tasks, International Journal of Innovative Computing and Applications, vol. 1, 2007, pp. 2-13.
  • [23] M. C. Nicoletti, J. R. Bertini Jr., D. Elizondo, L. Franco and J. M. Jerez, Constructive neural network algorithms for feedforward architectures suitable for classification tasks, In: Constructive Neural Networks, Studies in Comp. Intelligence, D. Elizondo, L. Franco and J.M. Jerez, Springer, 2010, pp. 1-23.
  • [24] Y. H. Pao, Adaptive pattern recognition and neural networks, Addison-Wesley, Reading, MA, 1989.
  • [25] Y. H. Pao and Y. Takefuji, Functional-link net computing: theory, system architecture, and functionalities, Computer, vol. 25, no. 5, 1992, pp. 76-79.
  • [26] R. G. Parekh, J. Yang and V. Honavar, Constructive neural-network learning algorithms for pattern classification, IEEE Transactions on Neural Networks, vol. 11, no. 2, 2000, pp. 436-451.
  • [27] H. Poulard, Barycentric correction procedure – a fast method of learning threshold units, In: Proc. of WCNN 95, vol. 1, 1995, pp. 710-713.
  • [28] D. E. Rumelhart, G. E. Hinton and R. J. Williams, Learning representations by back-propagating errors, Nature, vol. 323, no. 6088, 1986, pp. 533-536.
  • [29] M. R. Spiegel, Mathematical Handbook of Formulas and Tables, Schaum’s outline series, McGraw-Hill Inc., USA, 1968.
  • [30] L. Toth and T. Grosz, A comparison of deep neural network training methods for large vocabulary speech recognition, In: TSD 2013, I. Habernal and V. Matousek (Eds.), LNAI 8082, Springer, 2013, pp. 36-43.
  • [31] A. Wendemuth, Learning the unlearnable, Journal of Physics A: Mathematical and General, vol. 28, 1995, pp. 5423-5436.
  • [32] J. Yosinski, J. Clune, Y. Bengio and H. Lipson, How transferable are features in deep neural networks?, Advances in Neural Information Processing Systems 27, NIPS Foundation, 2014, pp. 3320-3328.
Notes
PL
Prepared with funds from the Ministry of Science and Higher Education (MNiSW) under agreement 812/P-DUN/2016 for activities disseminating science.
Document type
YADDA identifier
bwmeta1.element.baztech-0fec308a-79eb-402e-a647-564e45d35b51