The aim of this study was to build a machine learning model to discriminate Attention Deficit Hyperactivity Disorder (ADHD) patients from healthy controls using information from both time- and frequency-domain analysis of Event Related Potentials (ERP) obtained from Electroencephalography (EEG) signals while participants performed an auditory oddball task. The study included 23 unmedicated ADHD patients and 23 healthy controls. The EEG signal was analyzed in the time domain using nonlinear brain dynamics and morphological features, and in the time-frequency domain using wavelet coefficients. Selected features were applied to various machine learning techniques, including Multilayer Perceptron, Naïve Bayes, Support Vector Machines, k-nearest neighbor, Adaptive Boosting, Logistic Regression, and Random Forest, to classify ADHD patients and healthy controls. Longer P300 latencies and smaller P300 amplitudes were observed in ADHD patients relative to controls. Fractal dimension analysis showed reduced signal complexity in the ADHD group relative to the control group. In addition, certain wavelet coefficients differed significantly between the two groups. Combining these extracted features, our results indicated that the Multilayer Perceptron method provided the best classification, with an accuracy of 91.3% and a high level of agreement (Kappa = 0.82). The results showed that combining time- and frequency-domain features can be useful and discriminative for diagnostic purposes in ADHD. The study presents a supporting diagnostic tool that uses EEG signal processing and machine learning algorithms. The findings would be helpful in the objective diagnosis of ADHD.
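The abstract does not specify which fractal-dimension estimator was used for the nonlinear time-domain features; a common choice for EEG complexity analysis is Higuchi's method. A minimal NumPy sketch of that estimator (function name and parameters are our own, not the paper's):

```python
import numpy as np

def higuchi_fd(signal, k_max=8):
    """Estimate the Higuchi fractal dimension of a 1-D signal.

    For each delay k, the mean normalized curve length L(k) scales as
    k^(-D); D is recovered as the slope of log L(k) vs log(1/k).
    """
    x = np.asarray(signal, dtype=float)
    n = len(x)
    ks = np.arange(1, k_max + 1)
    lk = []
    for k in ks:
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)  # subsampled series x[m], x[m+k], ...
            if len(idx) < 2:
                continue
            dist = np.sum(np.abs(np.diff(x[idx])))
            # Higuchi normalization: scale by (n-1)/(num_intervals*k), then by 1/k
            lengths.append(dist * (n - 1) / ((len(idx) - 1) * k * k))
        lk.append(np.mean(lengths))
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lk), 1)
    return slope
```

A smooth trend (e.g. a straight line) yields a dimension near 1, while white noise yields a value near 2; reduced complexity in a group, as reported for the ADHD patients, corresponds to lower estimated dimensions.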
Bayesian networks have many practical applications due to their capability to represent a joint probability distribution over many variables in a compact way. Efficient reasoning methods exist for Bayesian networks, and many algorithms for learning Bayesian networks from empirical data have been developed. A well-known problem with Bayesian networks is the practical limit on the number of variables for which a Bayesian network can be learned in reasonable time. A remarkable exception here is the Chow/Liu algorithm, which learns tree-like Bayesian networks. However, this algorithm too has an important limitation related to space consumption: the space required is quadratic in the number of variables. The paper presents a novel algorithm overcoming this limitation for the tree-like class of Bayesian networks. The new algorithm's space consumption grows linearly with the number of variables, while its execution time is comparable with that of the Chow/Liu algorithm. This opens new perspectives in the construction of Bayesian networks from data containing thousands of variables or more, e.g. in automatic text categorization.