

Article title

A dynamic model of classifier competence based on the local fuzzy confusion matrix and the random reference classifier

Publication languages
EN
Abstract
EN
Nowadays, multiclassifier systems (MCSs) are widely applied to various machine learning problems across many domains. Over the last two decades a variety of ensemble systems have been developed, but there is still room for improvement. This paper focuses on developing competence and interclass cross-competence measures which can be applied as a method of classifier combination. The cross-competence measure allows an ensemble to harness information obtained from incompetent classifiers instead of removing them from the ensemble. The cross-competence measure, originally determined on the basis of a validation set (static mode), can easily be updated using additional feedback on correct/incorrect classification during the recognition process (dynamic mode). An analysis of the computational and storage complexity of the proposed method is presented. The performance of the MCS with the proposed cross-competence function was experimentally compared against five reference MCSs for the static mode and one reference MCS for the dynamic mode. Results for the static mode show that the proposed technique is comparable with the reference methods in terms of classification accuracy. For the dynamic mode, the developed system achieves the highest classification accuracy, demonstrating the potential of the MCS in practical applications where feedback information is available.
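The static/dynamic competence scheme described in the abstract can be illustrated with a minimal weighted-voting ensemble. This is an illustrative sketch only, not the paper's local-fuzzy-confusion-matrix method: competence here is simplified to a Laplace-smoothed running accuracy per base classifier, seeded on a validation set (static mode) and then updated from correct/incorrect feedback during recognition (dynamic mode). All names (`DynamicCompetenceEnsemble`, `fit_static`, `feedback`) are hypothetical.

```python
from collections import defaultdict


class DynamicCompetenceEnsemble:
    """Weighted-vote ensemble whose per-classifier competence weights are
    estimated on a validation set (static mode) and updated online from
    correct/incorrect feedback (dynamic mode). Illustrative sketch only."""

    def __init__(self, classifiers):
        # Each classifier is a callable x -> predicted class label.
        self.classifiers = classifiers
        # Laplace-smoothed correct/total counts per classifier.
        self.correct = [1.0] * len(classifiers)
        self.total = [2.0] * len(classifiers)

    def fit_static(self, X_val, y_val):
        # Static mode: seed competence with validation-set accuracy.
        for i, clf in enumerate(self.classifiers):
            for x, y in zip(X_val, y_val):
                self.total[i] += 1.0
                if clf(x) == y:
                    self.correct[i] += 1.0

    def predict(self, x):
        # Weighted majority vote; weight = estimated competence.
        votes = defaultdict(float)
        for i, clf in enumerate(self.classifiers):
            votes[clf(x)] += self.correct[i] / self.total[i]
        return max(votes, key=votes.get)

    def feedback(self, x, y_true):
        # Dynamic mode: once the true label is revealed, update the
        # competence estimate of every base classifier.
        for i, clf in enumerate(self.classifiers):
            self.total[i] += 1.0
            if clf(x) == y_true:
                self.correct[i] += 1.0
```

With feedback available, an accurate base classifier quickly accumulates a larger vote weight than an inaccurate one, which mirrors the abstract's claim that the dynamic mode benefits from correct/incorrect classification feedback.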
Pages
175–189
Physical description
Bibliography: 47 items, figures, tables, charts.
Authors
author
  • Department of Systems and Computer Networks, Wrocław University of Technology, Wybrzeże Wyspiańskiego 27, 50-370 Wrocław, Poland
author
  • Department of Systems and Computer Networks, Wrocław University of Technology, Wybrzeże Wyspiańskiego 27, 50-370 Wrocław, Poland
Bibliography
  • [1] Bache, K. and Lichman, M. (2013). UCI machine learning repository, http://archive.ics.uci.edu/ml.
  • [2] Berger, J.O. (1985). Statistical Decision Theory and Bayesian Analysis, Springer-Verlag, New York, NY.
  • [3] Bishop, C. (1995). Neural Networks for Pattern Recognition, Clarendon Press/Oxford University Press, Oxford/New York, NY.
  • [4] Blum, A. (1998). On-line algorithms in machine learning, in A. Fiat and G.J.Woeginger (Eds.), Developments from a June 1996 Seminar on Online Algorithms: The State of the Art, Springer-Verlag, London, pp. 306–325.
  • [5] Breiman, L. (1996). Bagging predictors, Machine Learning 24(2): 123–140.
  • [6] Breiman, L., Friedman, J., Olshen, R. and Stone, C. (1984). Classification and Regression Trees, Wadsworth and Brooks, Monterey, CA.
  • [7] Cover, T. and Hart, P. (1967). Nearest neighbor pattern classification, IEEE Transactions on Information Theory 13(1): 21–27, DOI:10.1109/TIT.1967.1053964.
  • [8] Dai, Q. (2013). A competitive ensemble pruning approach based on cross-validation technique, Knowledge-Based Systems 37(9): 394–414, DOI: 10.1016/j.knosys.2012.08.024.
  • [9] Demšar, J. (2006). Statistical comparisons of classifiers over multiple data sets, The Journal of Machine Learning Research 7: 1–30.
  • [10] Devroye, L., Györfi, L. and Lugosi, G. (1996). A Probabilistic Theory of Pattern Recognition, Springer, New York, NY.
  • [11] Didaci, L., Giacinto, G., Roli, F. and Marcialis, G.L. (2005). A study on the performances of dynamic classifier selection based on local accuracy estimation, Pattern Recognition 38(11): 2188–2191.
  • [12] Dietterich, T.G. (2000). Ensemble methods in machine learning, Proceedings of the 1st International Workshop on Multiple Classifier Systems, MCS’00, Cagliari, Italy, pp. 1–15.
  • [13] Dunn, O.J. (1961). Multiple comparisons among means, Journal of the American Statistical Association 56(293): 52–64.
  • [14] Fraz, M.M., Remagnino, P., Hoppe, A., Uyyanonvara, B., Rudnicka, A.R., Owen, C.G. and Barman, S. (2012). An ensemble classification-based approach applied to retinal blood vessel segmentation, IEEE Transactions on Biomedical Engineering 59(9): 2538–2548.
  • [15] Freund, Y. and Schapire, R. (1996). Experiments with a new boosting algorithm, Machine Learning: Proceedings of the 13th International Conference, Bari, Italy, pp. 148–156.
  • [16] Friedman, M. (1940). A comparison of alternative tests of significance for the problem of m rankings, The Annals of Mathematical Statistics 11(1): 86–92, DOI: 10.2307/2235971.
  • [17] Gama, J. (2010). Knowledge Discovery from Data Streams, 1st Edn., Chapman & Hall/CRC, London.
  • [18] Giacinto, G. and Roli, F. (2001). Dynamic classifier selection based on multiple classifier behaviour, Pattern Recognition 34(9): 1879–1881.
  • [19] Holm, S. (1979). A simple sequentially rejective multiple test procedure, Scandinavian Journal of Statistics 6(2): 65–70.
  • [20] Hsieh, N.-C. and Hung, L.-P. (2010). A data driven ensemble classifier for credit scoring analysis, Expert systems with Applications 37(1): 534–545.
  • [21] Huenupán, F., Yoma, N.B., Molina, C. and Garretón, C. (2008). Confidence based multiple classifier fusion in speaker verification, Pattern Recognition Letters 29(7): 957–966.
  • [22] Jurek, A., Bi, Y., Wu, S. and Nugent, C. (2013). A survey of commonly used ensemble-based classification techniques, The Knowledge Engineering Review 29(5): 551–581, DOI: 10.1017/s0269888913000155.
  • [23] Kittler, J. (1998). Combining classifiers: A theoretical framework, Pattern Analysis and Applications 1(1): 18–27.
  • [24] Ko, A.H., Sabourin, R. and Britto, Jr., A.S. (2008). From dynamic classifier selection to dynamic ensemble selection, Pattern Recognition 41(5): 1718–1731.
  • [25] Kuncheva, L.I. (2004). Combining Pattern Classifiers: Methods and Algorithms, 1st Edn., Wiley-Interscience, New York, NY.
  • [26] Kuncheva, L.I. and Rodríguez, J.J. (2014). A weighted voting framework for classifiers ensembles, Knowledge and Information Systems 38(2): 259–275.
  • [27] Kurzynski, M. (1987). Diagnosis of acute abdominal pain using three-stage classifier, Computers in Biology and Medicine 17(1): 19–27.
  • [28] Kurzynski, M., Krysmann, M., Trajdos, P. and Wolczowski, A. (2014). Two-stage multiclassifier system with correction of competence of base classifiers applied to the control of bioprosthetic hand, IEEE International Conference on Tools with Artificial Intelligence, ICTAI 2014, Limassol, Cyprus.
  • [29] Kurzynski, M. and Wolczowski, A. (2012). Control system of bioprosthetic hand based on advanced analysis of biosignals and feedback from the prosthesis sensors, Proceedings of the 3rd International Conference on Information Technologies in Biomedicine, ITIB 12, Kamień Śląski, Poland, pp. 199–208.
  • [30] Mamoni, D. (2013). On cardinality of fuzzy sets, International Journal of Intelligent Systems and Applications 5(6): 47–52.
  • [31] Plumpton, C.O. (2014). Semi-supervised ensemble update strategies for on-line classification of FMRI data, Pattern Recognition Letters 37: 172–177.
  • [32] Plumpton, C.O., Kuncheva, L.I., Oosterhof, N.N. and Johnston, S.J. (2012). Naive random subspace ensemble with linear classifiers for real-time classification of FMRI data, Pattern Recognition 45(6): 2101–2108.
  • [33] R Core Team (2012). R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, http://www.R-project.org/.
  • [34] Rokach, L. (2010). Ensemble-based classifiers, Artificial Intelligence Review 33(1–2): 1–39.
  • [35] Rokach, L. and Maimon, O. (2005). Clustering methods, Data Mining and Knowledge Discovery Handbook, Springer Science + Business Media, New York, NY, pp. 321–352.
  • [36] Rousseeuw, P. (1987). Silhouettes: A graphical aid to the interpretation and validation of cluster analysis, Journal of Computational and Applied Mathematics 20(1): 53–65.
  • [37] Scholkopf, B. and Smola, A.J. (2001). Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, MIT Press, Cambridge, MA.
  • [38] Tahir, M.A., Kittler, J. and Bouridane, A. (2012). Multilabel classification using heterogeneous ensemble of multi-label classifiers, Pattern Recognition Letters 33(5): 513–523.
  • [39] Tsoumakas, G., Katakis, I. and Vlahavas, I. (2011). Random k-labelsets for multi-label classification, IEEE Transactions on Knowledge and Data Engineering 23(7): 1079–1089.
  • [40] Valdovinos, R. and Sánchez, J. (2009). Combining multiple classifiers with dynamic weighted voting, in E. Corchado et al. (Eds.), Hybrid Artificial Intelligence Systems, Lecture Notes in Computer Science, Vol. 5572, Springer, Berlin/Heidelberg, pp. 510–516.
  • [41] Ward, J. (1963). Hierarchical grouping to optimize an objective function, Journal of the American Statistical Association 58(301): 236–244.
  • [42] Wilcoxon, F. (1945). Individual comparisons by ranking methods, Biometrics Bulletin 1(6): 80–83.
  • [43] Woloszynski, T. (2013). Classifier competence based on probabilistic modeling (ccprmod.m) at Matlab central file exchange, http://www.mathworks.com/matlabcentral/fileexchange/28391-a-probabilistic-model-of-classifier-competence.
  • [44] Woloszynski, T. and Kurzynski, M. (2011). A probabilistic model of classifier competence for dynamic ensemble selection, Pattern Recognition 44(10–11): 2656–2668.
  • [45] Woloszynski, T., Kurzynski, M., Podsiadlo, P. and Stachowiak, G.W. (2012). A measure of competence based on random classification for dynamic ensemble selection, Information Fusion 13(3): 207–213.
  • [46] Wolpert, D.H. (1992). Stacked generalization, Neural Networks 5(2): 241–259.
  • [47] Wozniak, M., Graña, M. and Corchado, E. (2014). A survey of multiple classifier systems as hybrid systems, Information Fusion 16(1): 3–17.
Notes
Prepared with funds from the Ministry of Science and Higher Education (MNiSW) under agreement 812/P-DUN/2016 for activities popularizing science.
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-f638cd89-c0d1-4024-afa5-6436705b56ae