Article title

Two New Decomposition Algorithms for Training Bound-Constrained Support Vector Machines

Authors
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
The bound-constrained Support Vector Machine (SVM) is one of the state-of-the-art models for binary classification. The decomposition method is currently one of the major approaches for training SVMs, especially when a nonlinear kernel is used. In this paper, we propose two new decomposition algorithms for training bound-constrained SVMs. A projected gradient algorithm and an interior point method are combined to solve the quadratic subproblem efficiently. The main difference between the two algorithms is the way the working set is chosen. The first uses only first-order derivative information of the model, for simplicity. The second incorporates part of the second-order information into the working set selection, in addition to the gradient. Both algorithms are proved to be globally convergent in theory. The new algorithms are compared with the well-known package BSVM. Numerical experiments on several public data sets validate the efficiency of the proposed methods.
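The abstract only outlines the general scheme, so the following is a purely illustrative Python sketch of a decomposition loop with first-order (gradient-based) working-set selection for a bound-constrained SVM dual, min 0.5*a'Qa - e'a subject to 0 <= a <= C. It is not the authors' implementation: the RBF kernel, the working-set size q, and the plain projected-gradient inner solver (standing in here for the paper's combined projected-gradient/interior-point subproblem solver) are all assumptions made for the example.

import numpy as np

def rbf_kernel(X, gamma=0.5):
    # Gram matrix of an RBF kernel (gamma is an illustrative choice)
    sq = np.sum(X ** 2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

def decomposition_bsvm(X, y, C=1.0, q=10, max_outer=200, tol=1e-3):
    # Dual: min 0.5*a'Qa - e'a,  0 <= a <= C,  with Q_ij = y_i y_j K(x_i, x_j)
    n = y.shape[0]
    Q = (y[:, None] * y[None, :]) * rbf_kernel(X)
    alpha = np.zeros(n)
    grad = -np.ones(n)                        # gradient of the dual at alpha = 0
    for _ in range(max_outer):
        # First-order violation: how much each variable can still decrease the objective
        viol = np.where(alpha < C, np.maximum(-grad, 0.0), 0.0) + \
               np.where(alpha > 0, np.maximum(grad, 0.0), 0.0)
        if viol.max() < tol:
            break
        B = np.argsort(-viol)[:q]             # working set: q most violating variables
        QBB = Q[np.ix_(B, B)]
        step = 1.0 / (np.linalg.norm(QBB, 2) + 1e-12)
        for _ in range(50):                   # projected-gradient inner solver on B
            g_B = Q[B] @ alpha - 1.0
            new = np.clip(alpha[B] - step * g_B, 0.0, C)
            done = np.linalg.norm(new - alpha[B]) < 1e-6
            alpha[B] = new
            if done:
                break
        grad = Q @ alpha - 1.0                # refresh the full gradient
    return alpha

With labels y encoded as +1/-1, alpha = decomposition_bsvm(X, y) returns the dual variables. Because the bound-constrained formulation absorbs the bias term into the model, the dual has only box constraints, which is what allows the simple clipping projection used above.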
Year
Pages
67--86
Physical description
Bibliography: 31 items, figures, tables
Authors
author
  • Research Center on Fictitious Economy & Data Science, University of Chinese Academy of Sciences, Key Laboratory of Big Data Mining and Knowledge Management, Chinese Academy of Sciences, Beijing, China
author
  • Research Center on Fictitious Economy & Data Science, University of Chinese Academy of Sciences, Key Laboratory of Big Data Mining and Knowledge Management, Chinese Academy of Sciences, Beijing, China
author
  • Research Center on Fictitious Economy & Data Science, University of Chinese Academy of Sciences, Key Laboratory of Big Data Mining and Knowledge Management, Chinese Academy of Sciences, Beijing, China
author
  • Research Center on Fictitious Economy & Data Science, University of Chinese Academy of Sciences, Key Laboratory of Big Data Mining and Knowledge Management, Chinese Academy of Sciences, Beijing, China
Bibliography
  • [1] Asuncion A., Newman D., UCI Machine Learning Repository, University of California, Irvine, School of Information and Computer Sciences (2007) http://www.ics.uci.edu/~mlearn/MLRepository.html.
  • [2] Boser B., Guyon I., Vapnik V., A training algorithm for optimal margin classifiers, Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory, ACM Press, 1992, 144-152.
  • [3] Bottou L., Stochastic gradient descent examples, 2007, http://leon.bottou.org/projects/sgd.
  • [4] Cortes C., Vapnik V., Support-vector networks, Machine Learning, 20, 3, 1995, 273-297.
  • [5] Franc V., Sonnenburg S., Optimized cutting plane algorithm for support vector machines, ICML 08: Proceedings of the 25th international conference on Machine learning, ACM Press 2008, 320-327.
  • [6] Gertz E., Wright S., Object-oriented software for quadratic programming, ACM Transactions on Mathematical Software, 29, 2001, 58-81.
  • [7] Hsu C.W., Lin C.J., A comparison of methods for multi-class support vector machines, IEEE Transactions on Neural Networks, 13, 2002, 415-425.
  • [8] Hsieh C., Chang K., Lin C.J., Keerthi S., Sundararajan S., A dual coordinate descent method for large-scale linear SVM, Proceedings of the 25th international conference on Machine learning, ACM, 2008, 408-415.
  • [9] Hsu C.W., Lin C.J., A simple decomposition method for support vector machines, Machine Learning, 46, 1-3, 2002, 291-314.
  • [10] Joachims T., SVMlight, http://svmlight.joachims.org/.
  • [11] Joachims T., Training linear SVMs in linear time, ACM SIGKDD International Conference On Knowledge Discovery and Data Mining, 2006, 217-226.
  • [12] Joachims T., Finley T., Yu C.N., Cutting-plane training of structural SVMs, Machine Learning, 77, 1, 2009, 27-59.
  • [13] Joachims T., Yu C.N., Sparse kernel SVMs via cutting-plane training, Machine Learning (Special Issue for European Conference on Machine Learning), 76, 2-3, 2009, 179-193.
  • [14] Mangasarian O., Musicant D., Successive overrelaxation for support vector machines, IEEE Transactions on Neural Networks, 10, 5, 1999, 1032-1037.
  • [15] Mangasarian O., Musicant D., Lagrangian support vector machines, Journal of Machine Learning Research, 1, 2001, 161-177.
  • [16] Mercer J., Functions of positive and negative type and their connection with the theory of integral equations, Philosophical Transactions of the Royal Society of London, 1909.
  • [17] Shalev-Shwartz S., Singer Y., Srebro N., Cotter A., Pegasos: Primal Estimated sub-Gradient Solver for SVM, Mathematical Programming, 127, 1, 2011, 3-30.
  • [18] Sun W.Y., Yuan Y.X., Optimization Theory and Methods: Nonlinear Programming, Springer, New York, USA, 2006.
  • [19] Frieß T., Cristianini N., Campbell C., The kernel-adatron algorithm: a fast and simple learning procedure for support vector machines, Proceedings of the Fifteenth International Conference on Machine Learning, Morgan Kaufmann Publishers, 1998.
  • [20] Vapnik V., The Nature of Statistical Learning Theory, Springer-Verlag New York, Inc., New York, NY, USA 1995.
  • [21] Vapnik V., Statistical Learning Theory, Wiley-Interscience, September 1998.
  • [22] Yuan G.X., Ho C.H., Lin C.J., Recent Advances of Large-scale Linear Classification, Proceedings of the IEEE, 100, 2012, 2584-2603.
  • [23] Zanni L., Serafini T., Zanghirati G., Parallel software for training large scale support vector machines on multiprocessor systems, Journal of Machine Learning Research, 7, 2006, 1467-1492.
  • [24] Osuna E., Freund R., Girosi F., Training support vector machines: An application to face detection, Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1997, 276-285.
  • [25] Arnosti N.A., Kalita J.K., Cutting Plane Training for Linear Support Vector Machines, IEEE Transactions on Knowledge and Data Engineering, 25, 2013, 1186-1190.
  • [26] Platt J.C., Fast training of support vector machines using sequential minimal optimization, in: B. Schölkopf, C.J.C. Burges, A.J. Smola (Eds.), Advances in Kernel Methods - Support Vector Learning, MIT Press, 1999, 185-208.
  • [27] Joachims T., Making large-scale SVM learning practical, in: B. Schölkopf, C.J.C. Burges, A.J. Smola (Eds.), Advances in Kernel Methods - Support Vector Learning, MIT Press, Cambridge, MA, 1998.
  • [28] Saunders C., Stitson M.O., Weston J., Bottou L., Schölkopf B., Smola A., Support vector machine reference manual, Technical Report CSD-TR-98-03, Royal Holloway, University of London, Egham, UK, 1998.
  • [29] Tian Y.J., Qi Z.Q., Ju X.C., Shi Y., Liu X.H., Nonparallel support vector machines for pattern classification, IEEE Trans. Cybernetics, 44, 7, 2013, 1067-1079.
  • [30] Qi Z., Tian Y., Shi Y., Successive Overrelaxation for Laplacian Support Vector Machine, IEEE Transactions on Neural Networks and Learning Systems, DOI: 10.1109/TNNLS.2014.2320738, 2014.
  • [31] Qi Z., Tian Y., Shi Y., Structural Twin Support Vector Machine for Classification, Knowledge-Based Systems, 43, 2013, 74-81.
Document type
YADDA identifier
bwmeta1.element.baztech-9a646fab-db73-45d2-aa6b-60caa50fb397