In this study, we introduce an entropy-based method to regularize the AdaBoost algorithm. AdaBoost is a well-known algorithm for constructing aggregated classifiers. In many real-world classification problems, in addition to the classification accuracy of the final classifier, great attention is paid to tuning the number of so-called weak learners that are aggregated into the final (strong) classifier. The proposed method improves the AdaBoost algorithm with respect to both criteria. While many approaches to regularizing boosting algorithms are complicated, the proposed method is straightforward and easy to implement. We compare the results of the proposed method (EntropyAdaBoost) with the original AdaBoost and with its regularized version, ε-AdaBoost, on several classification problems. We show that EntropyAdaBoost and ε-AdaBoost are strongly complementary when improvement over AdaBoost is considered.
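For reference, the following is a minimal sketch of the standard discrete AdaBoost baseline that the proposed method regularizes; the entropy penalty itself is not specified in this abstract, so only the unregularized algorithm is shown. The choice of depth-1 decision trees ("stumps") as weak learners, and the names `adaboost_fit` and `adaboost_predict`, are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, n_rounds=50):
    """Train discrete AdaBoost; labels y must take values in {-1, +1}."""
    n = X.shape[0]
    w = np.full(n, 1.0 / n)                   # uniform initial sample weights
    learners, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)  # weak learner (assumed)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.sum(w * (pred != y))         # weighted training error
        if err >= 0.5:                        # weak learner no better than chance
            break
        alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))
        w *= np.exp(-alpha * y * pred)        # up-weight misclassified points
        w /= w.sum()                          # renormalize to a distribution
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas

def adaboost_predict(X, learners, alphas):
    """Strong classifier: sign of the alpha-weighted vote of the weak learners."""
    agg = sum(a * h.predict(X) for a, h in zip(alphas, learners))
    return np.sign(agg)
```

The number of boosting rounds `n_rounds` bounds the number of aggregated weak learners, which is precisely the quantity the paper's regularization aims to keep small alongside accuracy.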