Article title

Minimum absolute error classifier design with generalization control

Authors
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
This paper introduces a new classifier design method based on an extension of the classical Ho-Kashyap procedure. The proposed method uses the absolute error rather than the squared error to design a linear classifier. Additionally, it provides easy control of generalization ability and robustness to outliers. Finally, examples are given to demonstrate the validity of the introduced method.
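The abstract describes replacing the squared-error criterion of a least-squares (Ho-Kashyap-style) linear discriminant with an absolute-error one. The following is a minimal illustrative sketch of that general idea only, fitting an L1-loss linear discriminant by iteratively reweighted least squares (cf. Green [6]); it is an assumed, simplified stand-in, not the paper's actual procedure (which additionally controls generalization), and all function names are hypothetical.

```python
import numpy as np

def l1_linear_classifier(X, y, n_iter=50, eps=1e-6):
    """Fit weights w (last entry is the bias) minimizing sum_i |x_i^T w - y_i|
    for labels y_i in {-1, +1}, via iteratively reweighted least squares."""
    Xa = np.hstack([X, np.ones((len(X), 1))])   # append bias column
    w = np.linalg.lstsq(Xa, y, rcond=None)[0]   # ordinary least-squares start
    for _ in range(n_iter):
        r = Xa @ w - y                          # current residuals
        d = 1.0 / np.maximum(np.abs(r), eps)    # IRLS weights ~ 1/|residual|
        A = Xa * d[:, None]                     # row-weighted design matrix
        # weighted least-squares step: solve (Xa^T D Xa) w = Xa^T D y
        w = np.linalg.solve(Xa.T @ A, Xa.T @ (d * y))
    return w

def predict(w, X):
    Xa = np.hstack([X, np.ones((len(X), 1))])
    return np.sign(Xa @ w)
```

Because each IRLS step downweights large residuals more slowly than the squared-error criterion would, outlying samples pull the separating hyperplane less, which is the robustness property the abstract refers to.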
Year
Pages
289--299
Physical description
Bibliography: 19 items, figures, tables
Creators
author
  • Institute of Electronics, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland
Bibliography
  • [1] I. Barrodale and F.D.K. Roberts: An Improved Algorithm for Discrete L1 Linear Approximation. SIAM J. Numer. Anal., 10(5), (1973), 839-848.
  • [2] I. Barrodale and A. Young: Algorithms for Best L1 and L∞ Linear Approximations on a Discrete Set. Numerische Mathematik, 8 (1966), 295-306.
  • [3] G. Cauwenberghs and T. Poggio: Incremental and Decremental Support Vector Machine Learning. Adv. Neural Information Processing Systems, Cambridge MA, MIT Press, 13 (2001).
  • [4] R. Detrano, A. Janosi, W. Steinbrunn, M. Pfisterer, J. Schmid, S. Sandhu, K. Guppy, S. Lee and V. Froelicher: International application of a new probability algorithm for the diagnosis of coronary artery disease. American Journal of Cardiology, 64 (1989), 304-310.
  • [5] R.O. Duda and P.E. Hart: Pattern Classification and Scene Analysis. John Wiley & Sons, New York, 1973.
  • [6] P.J. Green: Iteratively Reweighted Least Squares for Maximum Likelihood Estimation, and Some Robust and Resistant Alternatives. J. Roy. Statist. Soc., B(46), (1984), 149-192.
  • [7] R. Herbrich, T. Graepel and C. Campbell: Bayes Point Machines. Journal of Machine Learning Research, 1 (2001), 245-279.
  • [8] Y.-C. Ho and R.L. Kashyap: An algorithm for linear inequalities and its applications. IEEE Trans. Elec. Comp., 14 (1965), 683-688.
  • [9] Y.-C. Ho and R.L. Kashyap: A class of iterative procedures for linear inequalities. J. SIAM Control, 4 (1966), 112-115.
  • [10] P.J. Huber: Robust Statistics. Wiley, New York, 1981.
  • [11] M.I. Jordan and R.A. Jacobs: Hierarchical mixtures of experts and the EM algorithm. Neural Computation, 6(2), (1994), 181-214.
  • [12] P. McCullagh and J.A. Nelder: Generalized linear models. Chapman and Hall, London, 1983.
  • [13] R.M. Palhares and P.L.D. Peres: Robust Filtering with Guaranteed Energy-to-peak Performance - an LMI approach. Automatica, 36 (2000), 851-858.
  • [14] B.D. Ripley: Pattern Recognition and Neural Networks. Cambridge University Press, Cambridge, 1996.
  • [15] J.T. Tou and R.C. Gonzalez: Pattern Recognition Principles. Addison-Wesley, London, 1974.
  • [16] J.G. VanAntwerp and R.D. Braatz: A Tutorial on Linear and Bilinear Matrix Inequalities. J. Proc. Cont., 10 (2000), 363-385.
  • [17] V. Vapnik: An Overview of Statistical Learning Theory. IEEE Trans. Neural Networks, 10(5), (1999), 988-999.
  • [18] V. Vapnik: Statistical Learning Theory. Wiley, New York, 1998.
  • [19] A. Webb: Statistical Pattern Recognition. Arnold, London, 1999.
Document type
YADDA identifier
bwmeta1.element.baztech-article-BSW3-0002-0059