Article title

Self-assimilation for solving excessive information acquisition in potential learning

Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
This paper proposes a new computational method for potential learning that improves generalization and interpretation. Potential learning was introduced to simplify the computational procedures of information maximization and to specify which neurons should fire. However, potential learning sometimes absorbs too much information content from input patterns in the early stages of learning, which tends to degrade generalization performance. This can be remedied by making potential learning as slow as possible. Accordingly, we propose a procedure called “self-assimilation”, in which connection weights are accentuated by their characteristics observed at a specific learning step. This makes it possible to predict future connection weights in the early stages of learning. Thus, generalization can be improved by slow learning and, at the same time, the interpretation of connection weights can be improved via their enhanced characteristics. The method was applied to an artificial data set, as well as a real data set of counter services at a local government office in the Tokyo metropolitan area. The results show that generalization improved as learning was slowed. In addition, the number of strong connection weights became smaller, making interpretation by self-assimilation easier.
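The record gives only this high-level description of the procedure. As a rough illustration, the sketch below (plain NumPy) implements one plausible reading in which a neuron's "characteristic" is the normalized variance of its incoming weights raised to a power r (a common potentiality measure in the authors' related work), and self-assimilation blends each weight with a copy accentuated by that potentiality while holding the overall weight norm fixed, so that learning stays slow. The names potentiality and self_assimilate and the parameters r and alpha are assumptions for this sketch, not the paper's notation.

    import numpy as np

    def potentiality(W, r=2.0):
        # Hypothetical potentiality: variance of each hidden neuron's
        # incoming weights, normalized by the maximum and sharpened by r.
        v = W.var(axis=1)
        return (v / v.max()) ** r

    def self_assimilate(W, r=2.0, alpha=0.1):
        # One assumed self-assimilation step: blend each weight with a copy
        # accentuated by its neuron's potentiality, then rescale so the
        # overall norm is unchanged, which keeps the effective update slow.
        phi = potentiality(W, r)
        W_new = (1.0 - alpha) * W + alpha * phi[:, None] * W
        W_new *= np.linalg.norm(W) / (np.linalg.norm(W_new) + 1e-12)
        return W_new

    # Toy usage: 5 hidden neurons, 8 inputs. Repeated assimilation gradually
    # concentrates potentiality on a few neurons, mimicking the paper's claim
    # that fewer strong connection weights remain for interpretation.
    rng = np.random.default_rng(0)
    W = rng.normal(size=(5, 8))
    for _ in range(10):
        W = self_assimilate(W)
    print(np.round(potentiality(W), 3))

Under these assumptions, the norm rescaling is what keeps learning slow: weights are redistributed toward high-potentiality neurons without growing in overall magnitude.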
Year
Pages
5–29
Physical description
Bibliography: 20 items, figures
Authors
author
  • IT Education Center, Tokai University, 4-1-1 Kitakaname, Hiratsuka, Kanagawa 259-1292, Japan
author
  • Department of Politics and Economics, Tokai University, 4-1-1 Kitakaname, Hiratsuka, Kanagawa 259-1292, Japan
References
  • [1] R. Linsker, Self-organization in a perceptual network, Computer, vol. 21, no. 3, pp. 105–117, 1988.
  • [2] R. Linsker, How to generate ordered maps by maximizing the mutual information between input and output signals, Neural Computation, vol. 1, no. 3, pp. 402–411, 1989.
  • [3] R. Linsker, Local synaptic learning rules suffice to maximize mutual information in a linear network, Neural Computation, vol. 4, no. 5, pp. 691–702, 1992.
  • [4] R. Linsker, Improved local learning rule for information maximization and related applications, Neural Networks, vol. 18, no. 3, pp. 261–265, 2005.
  • [5] G. Deco, W. Finnoff, and H. Zimmermann, Unsupervised mutual information criterion for elimination of overtraining in supervised multilayer networks, Neural Computation, vol. 7, no. 1, pp. 86–107, 1995.
  • [6] G. Deco and D. Obradovic, An Information-Theoretic Approach to Neural Computing, Springer Science & Business Media, 2012.
  • [7] H. B. Barlow, Unsupervised learning, Neural Computation, vol. 1, no. 3, pp. 295–311, 1989.
  • [8] H. B. Barlow, T. P. Kaushal, and G. J. Mitchison, Finding minimum entropy codes, Neural Computation, vol. 1, no. 3, pp. 412–423, 1989.
  • [9] J. J. Atick, Could information theory provide an ecological theory of sensory processing?, Network: Computation in Neural Systems, vol. 3, no. 2, pp. 213–251, 1992.
  • [10] Z. Nenadic, Information discriminant analysis: Feature extraction with an information-theoretic objective, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 8, pp. 1394–1407, 2007.
  • [11] J. C. Principe, D. Xu, and J. Fisher, Information theoretic learning, Unsupervised Adaptive Filtering, vol. 1, pp. 265–319, 2000.
  • [12] J. C. Principe, Information Theoretic Learning: Renyi's Entropy and Kernel Perspectives, Springer Science & Business Media, 2010.
  • [13] K. Torkkola, Feature extraction by nonparametric mutual information maximization, The Journal of Machine Learning Research, vol. 3, pp. 1415–1438, 2003.
  • [14] R. Kamimura, Simple and stable internal representation by potential mutual information maximization, in International Conference on Engineering Applications of Neural Networks, pp. 309–316, Springer, 2016.
  • [15] R. Kamimura, Self-organizing selective potentiality learning to detect important input neurons, in 2015 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 1619–1626, IEEE, 2015.
  • [16] R. Kamimura, Collective interpretation and potential joint information maximization, in Intelligent Information Processing VIII: 9th IFIP TC 12 International Conference, IIP 2016, Melbourne, VIC, Australia, November 18-21, 2016, Proceedings, pp. 12–21, Springer, 2016.
  • [17] R. Kamimura, Repeated potentiality assimilation: Simplifying learning procedures by positive, independent and indirect operation for improving generalization and interpretation (in press), in Proc. of IJCNN-2016, Vancouver, 2016.
  • [18] R. Kamimura and T. Kamimura, Structural information and linguistic rule extraction, in Proceedings of ICONIP, pp. 720–726, 2000.
  • [19] R. Kamimura, T. Kamimura, and O. Uchida, Flexible feature discovery and structural information control, Connection Science, vol. 13, no. 4, pp. 323–347, 2001.
  • [20] R. Kamimura, Information-theoretic competitive learning with inverse Euclidean distance output units, Neural Processing Letters, vol. 18, no. 3, pp. 163–204, 2003.
Notes
Record created under agreement 509/P-DUN/2018 with funds from the Ministry of Science and Higher Education (MNiSW) allocated for science-dissemination activities (2018).
Document type
YADDA identifier
bwmeta1.element.baztech-3d3d675c-822c-4c31-8aa4-96c219539a68