Search results
Query (keywords): multi-class classification
Results found: 5
1
A novel multi-class approach for early-stage prediction of sudden cardiac death
Sudden cardiac death (SCD) is a complex issue that may occur in population groups with either known or unknown cardiovascular disease (CVD). Given the complex nature of SCD, the discovery of a suitable biomarker will prove essential in identifying individuals at risk of SCD, while discriminating them from patients with other cardiac pathologies as well as healthy individuals. Thus, this study aimed to develop an efficient approach to support a better understanding of heart rate variability (HRV) as a predictive biomarker for identifying SCD patients at an early stage. The study proposes a novel multi-class classification approach using signal processing methods of HRV to predict SCD 10 min before its occurrence. The developed algorithm was analyzed qualitatively and quantitatively in terms of discriminating SCD patients from heart failure patients and healthy subjects. A total of 51 HRV signals across the three classes, obtained from PhysioBank, were processed to extract 32 features per subject. Optimal feature selection was performed by a hybrid approach combining sequential feature selection with random under-sampling boosting. Multi-class classifiers, namely decision tree, support vector machine, and k-nearest neighbors, were used for classification. An average classification accuracy of 83.33% was obtained for SCD prediction 10 min before occurrence. This study therefore suggests a new, efficient approach to early-stage SCD prediction that differs considerably from those reported in the literature to date. However, to generalize the findings, the algorithm needs to be tested on a larger population group.
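The classification stage described above (three classes, 32 features per subject, classifiers such as k-NN) can be illustrated with a minimal sketch. The data here is synthetic with the same shape as the study's (51 subjects, 32 features); the real features come from PhysioBank HRV signals, and the k-NN below is a plain majority-vote implementation, not the study's tuned pipeline.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    """Majority-vote k-nearest-neighbour prediction with Euclidean distance."""
    preds = []
    for x in X_test:
        dist = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(dist)[:k]]
        preds.append(int(np.bincount(nearest).argmax()))
    return np.array(preds)

rng = np.random.default_rng(0)
# Synthetic stand-in for 51 subjects x 32 HRV features in 3 classes
# (SCD / heart failure / healthy)
y = rng.integers(0, 3, size=51)
X = rng.normal(size=(51, 32)) + y[:, None] * 2.0  # separate the class means

# Hold out the last 11 subjects and classify them
pred = knn_predict(X[:40], y[:40], X[40:], k=5)
accuracy = (pred == y[40:]).mean()
```

The same train/predict interface would apply to the decision tree and SVM classifiers the abstract lists; the study compares all three on the selected feature subset.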
2
We propose new methods for support vector machines using a tree architecture for multi-class classification. In each node of the tree, we select an appropriate binary classifier, using entropy and generalization error estimation, then group the examples into positive and negative classes based on the selected classifier, and train a new classifier for use in the classification phase. The proposed methods can work with classification time complexity between O(log2 N) and O(N), where N is the number of classes. We compare the performance of our methods with traditional techniques on datasets from the UCI machine learning repository using 10-fold cross-validation. The experimental results show that the methods are very useful for problems that need fast classification time or that have a large number of classes, since the proposed methods run much faster than the traditional techniques while providing comparable accuracy.
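The tree idea above can be sketched briefly: each internal node holds one binary classifier over a subset of classes, so prediction walks root-to-leaf in about log2(N) decisions. In this sketch the node classifier is a simple nearest-centroid rule and the class split is a naive halving, whereas the paper selects both using entropy and generalization-error estimates.

```python
import numpy as np

class TreeNode:
    def __init__(self, classes):
        self.classes = classes          # class labels reachable below this node
        self.left = self.right = None
        self.centroids = None           # (2, n_features) group centroids

def build_tree(X, y, classes):
    node = TreeNode(classes)
    if len(classes) == 1:
        return node                     # leaf: a single class remains
    # Naive split: halve the class list (the paper chooses the split and
    # the binary classifier by entropy / generalization-error estimation)
    left, right = classes[:len(classes) // 2], classes[len(classes) // 2:]
    mask_l, mask_r = np.isin(y, left), np.isin(y, right)
    node.centroids = np.stack([X[mask_l].mean(axis=0), X[mask_r].mean(axis=0)])
    node.left = build_tree(X[mask_l], y[mask_l], left)
    node.right = build_tree(X[mask_r], y[mask_r], right)
    return node

def predict_one(node, x):
    steps = 0
    while node.left is not None:        # one binary decision per level
        d = np.linalg.norm(node.centroids - x, axis=1)
        node = node.left if d[0] < d[1] else node.right
        steps += 1
    return node.classes[0], steps       # steps ~ log2(number of classes)

rng = np.random.default_rng(0)
y = np.repeat(np.arange(4), 20)
X = rng.normal(scale=0.5, size=(80, 2))
X[:, 0] += y * 10.0                     # 4 well-separated classes on a line
root = build_tree(X, y, [0, 1, 2, 3])
label, steps = predict_one(root, X[65])  # a sample from class 3
```

With 4 classes the walk makes exactly 2 binary decisions, versus 4 evaluations for one-vs-rest; the gap grows with N, which is the source of the paper's speedup.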
3
Extreme Classification under Limited Space and Time Budget
We discuss a new framework for solving extreme classification (i.e., learning problems with an extremely large label space), in which we reduce the original problem to a structured prediction problem. Thanks to this we can obtain learning algorithms that work under a strict time and space budget. We mainly focus on a recently introduced algorithm, referred to as LTLS, which is, to the best of our knowledge, the first truly logarithmic time and space (in the number of labels) method for extreme classification. We compare this algorithm with two other approaches that also rely on transformation to structured prediction problems. The first algorithm encodes the original labels as binary sequences. The second algorithm follows the label tree approach. The comparison shows the trade-off between computational complexity (in time and space) and predictive performance.
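The binary-sequence encoding mentioned above is the key to logarithmic cost: each label index becomes a codeword of ceil(log2 N) bits, and one binary classifier is trained per bit position. A minimal sketch of the encoding side (the classifiers themselves are omitted; this is not the LTLS construction, just the generic binary-code reduction):

```python
import math
import numpy as np

def label_to_bits(label, n_bits):
    """Encode a class index as a length-n_bits binary codeword (LSB first)."""
    return np.array([(label >> i) & 1 for i in range(n_bits)])

def bits_to_label(bits):
    """Decode the joint prediction of the n_bits binary classifiers."""
    return int(sum(int(b) << i for i, b in enumerate(bits)))

# With 1,000,000 labels, ceil(log2 N) = 20 binary models suffice,
# instead of one model per label as in one-vs-rest.
n_labels = 1_000_000
n_bits = math.ceil(math.log2(n_labels))
codeword = label_to_bits(123_456, n_bits)
recovered = bits_to_label(codeword)
```

The trade-off the abstract points to shows up here: a single flipped bit at prediction time yields a completely different label, which is why such codes trade predictive performance for their exponential savings in time and space.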
4
Modern communication systems require robust, adaptable, and high-performance decoders for efficient data transmission. The Support Vector Machine (SVM) is a margin-based classification and regression technique. In this paper, the decoding of Bose-Chaudhuri-Hocquenghem (BCH) codes is approached as a multi-class classification problem using SVM. In conventional decoding algorithms the procedure is usually fixed irrespective of the SNR environment in which transmission takes place, but SVM, being a machine learning algorithm, is adaptable to the communication environment. Since the construction of the SVM decoder model uses a training data set, application-specific decoders can be designed by choosing the training size efficiently. Because the soft-margin width in SVM is controlled by an optimization formulated as a quadratic programming problem, SVM has no local-minima issues and is robust to outliers.
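The framing above, with decoding as multi-class classification of noisy received words, can be sketched on a toy code. Everything here is a simplification for illustration: a length-8 repetition-style code stands in for an actual BCH code, an AWGN-like channel supplies the soft-valued training data, and a one-vs-rest perceptron replaces the margin-based SVM training the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy codebook: 4 messages -> length-8 codewords in +-1 form
# (a stand-in for a real BCH code)
codebook = np.array([[s1] * 4 + [s2] * 4
                     for s1 in (-1.0, 1.0) for s2 in (-1.0, 1.0)])

# Training data: codewords sent over a noisy channel (soft received values),
# labelled by the transmitted message index
y = rng.integers(0, 4, size=400)
X = codebook[y] + rng.normal(scale=0.4, size=(400, 8))

W = np.zeros((4, 8))                 # one linear scorer per message
for _ in range(20):                  # perceptron epochs (SVM stand-in)
    for x, label in zip(X, y):
        pred = int(np.argmax(W @ x))
        if pred != label:
            W[label] += x            # pull the correct class toward x
            W[pred] -= x             # push the confused class away

def decode(received):
    """Classify a received word as one of the four messages."""
    return int(np.argmax(W @ received))
```

The adaptability claim in the abstract corresponds to regenerating `X` for the SNR of the target channel: the learned decoder is fitted to the noise conditions it will actually see, unlike a fixed algebraic decoder.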
5
If a dataset is relatively small (e.g. the number of samples is less than the number of features) or the samples are distorted by noise, regularized models built on that dataset often give better results than unregularized ones. When the problem is ill-conditioned, regularization is necessary in order to find a solution. For data where neighbouring values are correlated (as in images or time series), not only the individual weights but also the differences between them may be penalized in the model. This paper presents the results of an experiment in which several types of regularization (l2, l1, penalized differences) and their combinations were used to fit a logistic regression model (trained using the one-vs.-rest strategy), in order to determine which of them works best for various training-set sizes. The data used in the experiment came from the publicly available MNIST dataset.
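The three penalties the abstract combines can be written into one gradient: l2 adds 2*lambda*w, l1 adds lambda*sign(w), and the penalized-differences term adds the gradient of ||Dw||^2, where D takes differences of neighbouring weights. A minimal binary sketch on synthetic data follows; the hyperparameter values and the toy problem are illustrative choices, not the paper's MNIST setup, and the paper's multi-class model would train one such binary model per digit under one-vs.-rest.

```python
import numpy as np

def fit_logreg(X, y, l2=0.0, l1=0.0, l_diff=0.0, lr=0.1, n_iter=500):
    """Binary logistic regression with l2, l1, and penalized-differences
    regularization, fitted by plain (sub)gradient descent."""
    n, d = X.shape
    w = np.zeros(d)
    # D maps w to differences of neighbouring weights; penalizing ||Dw||^2
    # smooths weights of correlated features (e.g. adjacent pixels)
    D = np.eye(d - 1, d, k=1) - np.eye(d - 1, d)
    for _ in range(n_iter):
        z = np.clip(X @ w, -30, 30)          # avoid overflow in exp
        p = 1.0 / (1.0 + np.exp(-z))
        grad = X.T @ (p - y) / n
        grad += 2.0 * l2 * w + l1 * np.sign(w) + 2.0 * l_diff * (D.T @ (D @ w))
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
# Synthetic problem: only the first 3 of 8 features carry signal
w_true = np.array([2.0, 2.0, 2.0, 0.0, 0.0, 0.0, 0.0, 0.0])
X = rng.normal(size=(300, 8))
y = (X @ w_true + rng.normal(scale=0.5, size=300) > 0).astype(float)

w_l2 = fit_logreg(X, y, l2=0.01)
w_l1 = fit_logreg(X, y, l1=0.05)
```

Combinations, as studied in the paper, amount to passing several nonzero penalty weights at once (e.g. `l2=0.01, l_diff=0.1`).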