Article title

Boosting Classifiers Built from Different Subsets of Features

Identifiers
Title variants
Publication languages
EN
Abstracts
EN
We focus on the adaptation of boosting to representation spaces composed of different subsets of features. Rather than imposing a single weak learner to handle data that could come from different sources (e.g., images, texts, and sounds), we suggest the decomposition of the learning task into several dependent sub-problems of boosting, treated by different weak learners, that will optimally collaborate during the weight update stage. To achieve this task, we introduce a new weighting scheme for which we provide theoretical results. Experiments are carried out and show that our method works significantly better than any combination of independent boosting procedures.
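The abstract describes the idea only at a high level. As a loose illustration (not the paper's actual algorithm or weighting scheme), the sketch below boosts decision stumps over two feature views that share a single weight distribution, so each view's update is visible to the other — the "collaboration" the abstract alludes to. All function names and the round structure are assumptions made for this sketch.

```python
import numpy as np

def stump_fit(X, y, w):
    """Find the best single-feature threshold stump under weights w (brute force)."""
    best = (None, None, 1, np.inf)  # (feature, threshold, polarity, weighted error)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = pol * np.where(X[:, j] <= thr, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, thr, pol, err)
    return best

def stump_predict(X, j, thr, pol):
    return pol * np.where(X[:, j] <= thr, 1, -1)

def two_view_boost(X1, X2, y, rounds=10):
    """Each round fits one stump per view; both views update the SAME
    weight distribution, so hard examples in one view raise the other
    view's focus on them (a crude stand-in for the paper's scheme)."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        for view, X in ((0, X1), (1, X2)):
            j, thr, pol, err = stump_fit(X, y, w)
            err = np.clip(err, 1e-10, 1 - 1e-10)
            alpha = 0.5 * np.log((1 - err) / err)   # classic AdaBoost weight
            pred = stump_predict(X, j, thr, pol)
            w *= np.exp(-alpha * y * pred)          # shared distribution update
            w /= w.sum()
            ensemble.append((view, j, thr, pol, alpha))
    return ensemble

def predict(ensemble, X1, X2):
    """Weighted vote of all stumps from both views."""
    score = np.zeros(len(X1))
    for view, j, thr, pol, alpha in ensemble:
        X = X1 if view == 0 else X2
        score += alpha * stump_predict(X, j, thr, pol)
    return np.sign(score)
```

The key design point is that the two views never maintain independent distributions, which is precisely what distinguishes the approach from running two boosting procedures separately and combining their outputs.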
Publisher
Year
Pages
89--109
Physical description
Bibliography: 23 items, tables, charts
Authors
author
author
author
  • Université de Saint-Etienne, F-42000, St-Etienne, France, UMR-CNRS 5516, Laboratoire Hubert Curien, 18 rue du Professeur Benoit Lauras, F-42000, St-Etienne, France, janodet@univ-st-etienne.fr
Bibliography
  • [1] Breiman, L.: Bagging Predictors, Machine Learning, 24(2), 1996, 123-140.
  • [2] Breiman, L.: Random Forests, Machine Learning, 45(1), 2001, 5-32.
  • [3] Burges, C. J. C.: A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 2, 1998, 121-167.
  • [4] Callut, J., Dupont, P.: Inducing Hidden Markov Models to Model Long-Term Dependencies, Proc. of the 16th European Conference on Machine Learning (ECML'05), LNAI 3720, 2005.
  • [5] Cherkauer, K. J.: Human Expert-Level Performance on a Scientific Image Analysis Task by a System Using Combined Artificial Neural Networks, Working Notes, Integrating Multiple Learned Models for Improving and Scaling Machine Learning Algorithms Workshop, 13th National Conference on Artificial Intelligence, 1996.
  • [6] Cristianini, N., Shawe-Taylor, J.: An Introduction to Support Vector Machines and other Kernel-based Learning Methods, Cambridge University Press, 2000.
  • [7] Denis, F., Esposito, Y., Habrard, A.: Learning Rational Stochastic Languages, Proc. of the 19th Conference on Computational Learning Theory (COLT'06), LNAI 4005, 2006.
  • [8] Dietterich, T. G.: Ensemble Methods in Machine Learning, Proc. of the 1st International Workshop on Multiple Classifier Systems, LNCS 1857, 2000.
  • [9] Durbin, R., Eddy, S. R., Krogh, A., Mitchison, G.: Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids, Cambridge University Press, 1999.
  • [10] Freund, Y., Schapire, R. E.: Experiments with a New Boosting Algorithm, Proc. of the 13th International Conference on Machine Learning (ICML'96), 1996.
  • [11] Freund, Y., Schapire, R. E.: A Decision-Theoretic Generalization of Online Learning and an Application to Boosting, Journal of Computer and System Sciences, 55(1), 1997, 119-139.
  • [12] Gama, J., Brazdil, P.: Cascade Generalization, Machine Learning, 41(3), 2000, 315-343.
  • [13] Garcia-Salicetti, S., Beumier, C., Chollet, G., Dorizzi, B., Leroux-Les-Jardins, J., Lunter, J., Ni, Y., Petrovska-Delacretaz, D.: BIOMET: A Multimodal Person Authentication Database Including Face, Voice, Fingerprint, Hand and Signature Modalities, Proc. of the 4th International Conference on Audio and Video-Based Biometric Person Authentication (AVBPA'03), LNCS 2688, 2003.
  • [14] Goodman, J.: A Bit of Progress in Language Modeling, Technical Report MSR-TR-2001-72, Microsoft Research, 2001.
  • [15] de la Higuera, C.: A Bibliographic Survey on Grammatical Inference, Pattern Recognition, 38(9), 2005, 1332-1348.
  • [16] Kearns, M. J., Vazirani, U. V.: An Introduction to Computational Learning Theory, M.I.T. Press, 1994.
  • [17] Kohavi, R.: A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection, Proc. of the 14th International Joint Conference on Artificial Intelligence (IJCAI'95), 1995.
  • [18] Koltchinskii, V., Panchenko, D.: Empirical Margin Distributions and Bounding the Generalization Error of Combined Classifiers, Annals of Statistics, 30(1), 2002, 1-50.
  • [19] Meir, R., Rätsch, G.: An Introduction to Boosting and Leveraging, Advanced Lectures on Machine Learning, LNAI 2600, 2003.
  • [20] Schapire, R. E., Freund, Y., Bartlett, P., Lee, W. S.: Boosting the Margin: A New Explanation for the Effectiveness of Voting Methods, Annals of Statistics, 26, 1998, 1651-1686.
  • [21] Schapire, R. E., Singer, Y.: Improved Boosting Algorithms using Confidence-rated Predictions, Proc. of the 11th International Conference on Computational Learning Theory (COLT'98), 1998.
  • [22] Vapnik, V.: Statistical Learning Theory, John Wiley, 1998.
  • [23] Wolpert, D. H.: Stacked Generalization, Neural Networks, 5(2), 1992, 241-259.
Document type
YADDA identifier
bwmeta1.element.baztech-article-BUS8-0008-0043