Article title

Combined classifier based on feature space partitioning

Languages of publication
EN
Abstracts
EN
This paper presents a significant modification of the AdaSS (Adaptive Splitting and Selection) algorithm, which was developed several years ago. The method simultaneously partitions the feature space and assigns a compound classifier to each of the resulting subsets. The original version of the algorithm uses a classifier committee and a majority voting rule to arrive at a decision. The proposed modification replaces this fairly simple fusion method with a combined classifier, which makes its decision based on a weighted combination of the discriminant functions of the individual classifiers selected for the committee. The weights depend not only on the classifier identifier but also on the class number. The approach builds on the results of previous works, which showed that such a combined classifier can achieve significantly better results than simple voting systems. The proposed modification was evaluated in computer experiments carried out on diverse benchmark datasets. The results are very promising: for most of the datasets, the proposed method outperforms similar techniques based on the clustering-and-selection approach.
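The fusion rule summarized above can be sketched as follows; the symbols $K$, $F_{k,j}$, and $w_{k,j}$ are illustrative and not the paper's own notation. Assume the committee assigned to a given feature-space subset contains $K$ classifiers, where the $k$-th member outputs a discriminant (support) function $F_{k,j}(x)$ for class $j$, and a weight $w_{k,j}$ has been trained for every classifier-class pair:

$$\hat{F}_j(x) = \sum_{k=1}^{K} w_{k,j}\, F_{k,j}(x), \qquad \Psi(x) = \arg\max_{j} \hat{F}_j(x).$$

Because the feature-space partitioning and the committee design are optimized jointly, a separate weight set can be trained for each subset, and the rule is applied within the competence area containing $x$; the original AdaSS would instead take a majority vote over the $K$ individual decisions.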
Pages
855-866
Physical description
Bibliography: 53 items, figures, tables
Authors
author
author
  • Department of Systems and Computer Networks, Wrocław University of Technology, Wybrzeże Wyspiańskiego 27, 50-370 Wrocław, Poland, michal.wozniak@pwr.wroc.pl
Bibliography
  • [1] Alpaydin, E. (1999). Combined 5 × 2 cv F test for comparing supervised classification learning algorithms, Neural Computation 11(8): 1885-1892.
  • [2] Alpaydin, E. (2010). Introduction to Machine Learning, 2nd Edn., The MIT Press, London.
  • [3] Ashlock, D. (2006). Evolutionary Computation for Modeling and Optimization, 1st Edn., Springer, New York, NY.
  • [4] Baram, Y. (1998). Partial classification: The benefit of deferred decision, IEEE Transactions on Pattern Analysis and Machine Intelligence 20(8): 769-776.
  • [5] Baruque, B., Porras, S. and Corchado, E. (2011). Hybrid classification ensemble using topology-preserving clustering, New Generation Computing 29(3): 329-344.
  • [6] Biggio, B., Fumera, G. and Roli, F. (2007). Bayesian analysis of linear combiners, Proceedings of the 7th International Conference on Multiple Classifier Systems, MCS'07, Prague, Czech Republic, pp. 292-301.
  • [7] Brown, G. and Kuncheva, L. I. (2010). "Good" and "bad" diversity in majority vote ensembles, 9th International Workshop on Multiple Classifier Systems, MCS 2010, Cairo, Egypt, pp. 124-133.
  • [8] Chmaj, G., Walkowiak, K., Tarnawski, M. and Kucharzak, M. (2012). Heuristic algorithms for optimization of task allocation and result distribution in peer-to-peer computing systems, International Journal of Applied Mathematics and Computer Science 22(3): 733-748, DOI: 10.2478/v10006-012-0055-0.
  • [9] Chow, C. K. (1965). Statistical independence and threshold functions, IEEE Transactions on Electronic Computers EC-14(1): 66-68.
  • [10] Cordella, L., Foggia, P., Sansone, C., Tortorella, F. and Vento, M. (2000). A cascaded multiple expert system for verification, in J. Kittler and F. Roli (Eds.), Multiple Classifier Systems, Lecture Notes in Computer Science, Vol. 1857, Springer, Berlin/Heidelberg, pp. 330-339.
  • [11] Dietterich, T. G. and Bakiri, G. (1995). Solving multiclass learning problems via error-correcting output codes, Journal of Artificial Intelligence Research 2: 263-286.
  • [12] Duda, R. O., Hart, P. E. and Stork, D. G. (2001). Pattern Classification, 2nd Edn., Wiley, New York, NY.
  • [13] Duin, R. (2002). The combining classifier: To train or not to train?, 16th International Conference on Pattern Recognition, Quebec, Canada, Vol. 2, pp. 765-770.
  • [14] Frank, A. and Asuncion, A. (2010). UCI machine learning repository, http://archive.ics.uci.edu/ml.
  • [15] Giacinto, G., Roli, F. and Fumera, G. (2000). Design of effective multiple classifier systems by clustering of classifiers, 15th International Conference on Pattern Recognition, Barcelona, Spain, Vol. 2, pp. 160-163.
  • [16] Goebel, K. and Yan, W. (2004). Choosing classifiers for decision fusion, Proceedings of the 7th International Conference on Information Fusion, Stockholm, Sweden, pp. 563-568.
  • [17] Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization and Machine Learning, 1st Edn., Addison-Wesley Longman Publishing Co., Inc., Boston, MA.
  • [18] Hansen, L. and Salamon, P. (1990). Neural network ensembles, IEEE Transactions on Pattern Analysis and Machine Intelligence 12(10): 993-1001.
  • [19] Ho, T. K. (1998). The random subspace method for constructing decision forests, IEEE Transactions on Pattern Analysis and Machine Intelligence 20(8): 832-844.
  • [20] Jackowski, K. and Woźniak, M. (2009). Algorithm of designing compound recognition system on the basis of combining classifiers with simultaneous splitting feature space into competence areas, Pattern Analysis and Applications 12(4): 415-425.
  • [21] Jacobs, R. A. (1995). Methods for combining experts' probability assessments, Neural Computation 7(5): 867-888.
  • [22] Jacobs, R. A., Jordan, M. I., Nowlan, S. J. and Hinton, G. E. (1991). Adaptive mixtures of local experts, Neural Computation 3(1): 79-87.
  • [23] Jain, A., Duin, R. and Mao, J. (2000). Statistical pattern recognition: A review, IEEE Transactions on Pattern Analysis and Machine Intelligence 22(1): 4-37.
  • [24] Jain, A. K., Murty, M. N. and Flynn, P. J. (1999). Data clustering: A review, ACM Computing Surveys 31(3): 264-323.
  • [25] Kacprzak, T., Walkowiak, K. and Woźniak, M. (2012). Optimization of overlay distributed computing systems for multiple classifier system: Heuristic approach, Logic Journal of the IGPL 20(4): 677-688.
  • [26] Krawczyk, B. and Woźniak, M. (2011). Designing cost-sensitive ensemble: Genetic approach, in R. Choras (Ed.), Image Processing and Communications Challenges 3, Advances in Intelligent and Soft Computing, Vol. 102, Springer, Berlin/Heidelberg, pp. 227-234.
  • [27] Kuncheva, L. (2000). Clustering-and-selection model for classifier combination, 4th International Conference on Knowledge-Based Intelligent Engineering Systems and Allied Technologies, Brighton, UK, Vol. 1, pp. 185-188.
  • [28] Kuncheva, L., Bezdek, J. C. and Duin, R. P. W. (2001). Decision templates for multiple classifier fusion: An experimental comparison, Pattern Recognition 34(2): 299-314.
  • [29] Kuncheva, L. I. (2004). Combining Pattern Classifiers: Methods and Algorithms, Wiley-Interscience, Hoboken, NJ.
  • [30] Kuncheva, L., Whitaker, C., Shipp, C. and Duin, R. (2003). Limits on the majority vote accuracy in classifier fusion, Pattern Analysis and Applications 6(1): 22-31.
  • [31] Kuratowski, K. and Mostowski, A. (1976). Set Theory: With An Introduction to Descriptive Set Theory, 2nd Edn., North-Holland Pub. Co., Amsterdam.
  • [32] MacQueen, J. B. (1967). Some methods for classification and analysis of multivariate observations, in L.M.L. Cam and J. Neyman (Eds.), Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1, University of California Press, Berkeley, CA, pp. 281-297.
  • [33] Marcialis, G. L. and Roli, F. (2003). Fusion of face recognition algorithms for video-based surveillance systems, in G.L. Foresti, C.S. Regazzoni and P.K. Varshney (Eds.), Multisensor Surveillance Systems: The Fusion Perspective, Dordrecht, The Netherlands, pp. 235-250.
  • [34] Matan, O. (1996). On voting ensembles of classifiers (extended abstract), Proceedings of the AAAI-96 Workshop on Integrating Multiple Learned Models, Portland, OR, USA, pp. 84-88.
  • [35] Partridge, D. and Krzanowski, W. (1997). Software diversity: Practical statistics for its measurement and exploitation, Information and Software Technology 39(10): 707-717.
  • [36] Polikar, R. (2006). Ensemble based systems in decision making, IEEE Circuits and Systems Magazine 6(3): 21-45.
  • [37] Rastrigin, L. and Erenstein, R. H. (1981). Method of Collective Recognition, Energoizdat, Moscow.
  • [38] Ruta, D. and Gabrys, B. (2005). Classifier selection for majority voting, Information Fusion 6(1): 63-81.
  • [39] Smetek, M. and Trawinski, B. (2011). Selection of heterogeneous fuzzy model ensembles using self-adaptive genetic algorithms, New Generation Computing 29(3): 309-327.
  • [40] Srinivas, M. and Patnaik, L. M. (1994). Genetic algorithms: A survey, Computer 27(6): 17-26.
  • [41] R Development Core Team (2008). R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna.
  • [42] Ting, K., Wells, J., Tan, S., Teng, S. and Webb, G. (2011). Feature-subspace aggregating: Ensembles for stable and unstable learners, Machine Learning 82(3): 375-397.
  • [43] Troć, M. and Unold, O. (2010). Self-adaptation of parameters in a learning classifier system ensemble machine, International Journal of Applied Mathematics and Computer Science 20(1): 157-174, DOI: 10.2478/v10006-010-0012-8.
  • [44] Tumer, K. and Ghosh, J. (1996). Analysis of decision boundaries in linearly combined neural classifiers, Pattern Recognition 29(2): 341-348.
  • [45] van Erp, M., Vuurpijl, L. and Schomaker, L. (2002). An overview and comparison of voting methods for pattern recognition, 8th International Workshop on Frontiers in Handwriting Recognition, Ontario, Canada, pp. 195-200.
  • [46] Walkowiak, K. (2010). Anycasting in connection-oriented computer networks: Models, algorithms and results, International Journal of Applied Mathematics and Computer Science 20(1): 207-220, DOI: 10.2478/v10006-010-0015-5.
  • [47] Wilk, T. and Woźniak, M. (2011). Complexity and multithreaded implementation analysis of one class-classifiers fuzzy combiner, in E. Corchado, M. Kurzynski and M. Wozniak (Eds.), Hybrid Artificial Intelligent Systems, Lecture Notes in Computer Science, Vol. 6679, Springer, Berlin/Heidelberg, pp. 237-244.
  • [48] Wolpert, D. H. (2001). The supervised learning no-free-lunch theorems, 6th Online World Conference on Soft Computing in Industrial Applications, pp. 25-42.
  • [49] Woods, K., Kegelmeyer Jr., W. P. and Bowyer, K. (1997). Combination of multiple classifiers using local accuracy estimates, IEEE Transactions on Pattern Analysis and Machine Intelligence 19(4): 405-410.
  • [50] Woźniak, M. (2008). Experiments on linear combiners, in E. Pietka and J. Kawa (Eds.), Information Technologies in Biomedicine, Advances in Soft Computing, Vol. 47, Springer, Berlin/Heidelberg, pp. 445-452.
  • [51] Woźniak, M. and Jackowski, K. (2009). Some remarks on chosen methods of classifier fusion based on weighted voting, in E. Corchado, X. Wu, E. Oja, A. Herrero and B. Baruque (Eds.), Hybrid Artificial Intelligence Systems, Lecture Notes in Computer Science, Vol. 5572, Springer, Berlin/Heidelberg, pp. 541-548.
  • [52] Woźniak, M. and Zmyślony, M. (2010). Combining classifiers using trained fuser: Analytical and experimental results, Neural Network World 13(7): 925-934.
  • [53] Xu, L., Krzyzak, A. and Suen, C. (1992). Methods of combining multiple classifiers and their applications to handwriting recognition, IEEE Transactions on Systems, Man and Cybernetics 22(3): 418-435.
YADDA identifier
bwmeta1.element.baztech-article-BPZ7-0008-0006