Found 3 results

Search results
Searched in keywords: sparse autoencoder
1
EN
Security threats and other intrusions affecting the availability, confidentiality and integrity of IT resources and services are spreading fast and can cause serious harm to organizations. Intrusion detection plays a key role in catching such intrusions, and the application of machine learning methods in this area can improve detection efficiency. Various methods, such as pattern recognition from event logs, can be applied to intrusion detection. The main goal of our research is to present a possible intrusion detection approach based on recent machine learning techniques. In this paper, we propose and evaluate stacked ensembles consisting of neural network (SNN) and autoencoder (AE) models, augmented with a tree-structured Parzen estimator (TPE) hyperparameter optimization approach, for intrusion detection. The main contribution of our work is the combined application of advanced hyperparameter optimization and stacked ensembles. We conducted several experiments to assess the effectiveness of our approach, training our models on the NSL-KDD dataset, a common benchmark in intrusion detection. The comparative results demonstrate that the proposed models can compete with and, in some cases, outperform existing models.
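As a rough illustration of pairing TPE-based hyperparameter optimization with a stacked ensemble, the sketch below uses Optuna's TPESampler to tune a small scikit-learn MLP and then stacks two such networks behind a logistic-regression meta-learner. This is a minimal sketch, not the paper's implementation: synthetic data stands in for NSL-KDD, and names such as objective, hidden_units and alpha are illustrative choices.

```python
# Minimal sketch: TPE hyperparameter search + a stacked ensemble (not the paper's code).
import optuna
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for an intrusion detection dataset such as NSL-KDD.
X, y = make_classification(n_samples=2000, n_features=40, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def objective(trial):
    # TPE proposes hyperparameters; cross-validated accuracy is the objective.
    hidden = trial.suggest_int("hidden_units", 16, 128)
    alpha = trial.suggest_float("alpha", 1e-5, 1e-2, log=True)
    clf = MLPClassifier(hidden_layer_sizes=(hidden,), alpha=alpha,
                        max_iter=300, random_state=0)
    return cross_val_score(clf, X_train, y_train, cv=3).mean()

study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=20)

# Stack a tuned network and a second base learner behind a meta-learner.
best = study.best_params
base = [("nn1", MLPClassifier(hidden_layer_sizes=(best["hidden_units"],),
                              alpha=best["alpha"], max_iter=300, random_state=0)),
        ("nn2", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300,
                              random_state=1))]
stack = StackingClassifier(estimators=base, final_estimator=LogisticRegression())
stack.fit(X_train, y_train)
print("test accuracy:", stack.score(X_test, y_test))
```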
2
EN
This paper proposes a fault detection method based on nonlinear feature extraction for hybrid industrial processes with both nonstationary and stationary behavior. The method is built on a sparse auto-encoder and a sparse restricted Boltzmann machine (SAE-SRBM), so as to take advantage of their adaptive extraction and fusion of strongly nonlinear symptoms. SAEs are employed to reconstruct the inputs and extract features in an unsupervised manner, but their outputs follow an unknown probability distribution. To address this, SRBMs are used to fuse these features by transforming them into energy characteristics. The contribution of the method is its ability to further mine and learn nonlinear features without treating the nonstationarity explicitly. The paper also introduces a way of constructing labeled and unlabeled training samples while preserving the time-series structure: unlabeled samples are used to train the feature extraction and fusion part, while labeled samples are used to train the classification part. Finally, a simulation on the Tennessee Eastman process demonstrates the effectiveness and strong fault detection performance of the method on such hybrid processes.
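To make the SAE stage above concrete, here is a minimal sketch (not the authors' code), assuming PyTorch, random data in place of Tennessee Eastman measurements, and a KL-divergence sparsity penalty as the sparsity mechanism; the SRBM fusion and classification stages are omitted.

```python
# Minimal sparse auto-encoder sketch for unsupervised feature extraction.
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    """Single hidden layer: sigmoid encoder, linear decoder."""
    def __init__(self, n_in=52, n_hidden=20):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = self.encoder(x)
        return self.decoder(h), h

def kl_sparsity(h, rho=0.05, eps=1e-6):
    # KL divergence between a target activation rate rho and the mean
    # activation of each hidden unit -- the usual SAE sparsity penalty.
    rho_hat = h.mean(dim=0).clamp(eps, 1 - eps)
    return (rho * torch.log(rho / rho_hat)
            + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()

model = SparseAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(256, 52)  # placeholder batch of unlabeled process samples

for _ in range(200):  # unsupervised training: reconstruction + sparsity
    recon, h = model(x)
    loss = nn.functional.mse_loss(recon, x) + 0.1 * kl_sparsity(h)
    opt.zero_grad()
    loss.backward()
    opt.step()

features = model.encoder(x).detach()  # nonlinear features handed to the next stage
```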
3
P300 based character recognition using sparse autoencoder with ensemble of SVMs
EN
In this study, a brain–computer interface (BCI) system known as the P300 speller is used to spell words or characters without any muscle activity. Feature extraction is an important step in P300 signal classification. In this work, deep feature learning techniques based on a sparse autoencoder (SAE) and a stacked sparse autoencoder (SSAE) are proposed for feature extraction. Deep features provide abstract information about the signal, so this work proposes fusing them with temporal features; the two kinds of features partially complement each other in representing the EEG signal. For classification, an ensemble of support vector machines (ESVM) is adopted, as it helps to reduce classifier variability. In a classifier ensemble, the scores of the individual classifiers are not on the same scale, so min–max normalization is applied before combining them, rescaling each classifier's scores to the range 0 to 1. The experiments are conducted on three standard public datasets: dataset IIb of BCI Competition II, dataset II of BCI Competition III and the BNCI Horizon dataset. The experimental results show that the proposed method yields better or comparable performance compared to earlier reported techniques.
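As a rough sketch of the score-fusion step described above (not the authors' pipeline), the code below trains a small ensemble of SVMs on synthetic data standing in for the extracted EEG features, min–max normalizes each classifier's decision scores to [0, 1], and averages them; the minmax helper and the 0.5 decision threshold are illustrative choices.

```python
# Minimal sketch: ensemble of SVMs with min-max normalized score fusion.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for P300 feature vectors (deep + temporal features).
X, y = make_classification(n_samples=600, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def minmax(s):
    # Rescale one classifier's scores to a common [0, 1] range.
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

# Train each SVM on a different random subset of the training data.
rng = np.random.default_rng(0)
scores = []
for seed in range(5):
    idx = rng.choice(len(X_tr), size=len(X_tr) // 2, replace=False)
    svm = SVC(kernel="linear").fit(X_tr[idx], y_tr[idx])
    scores.append(minmax(svm.decision_function(X_te)))

fused = np.mean(scores, axis=0)    # combine the normalized scores
pred = (fused >= 0.5).astype(int)  # illustrative final decision rule
print("accuracy:", (pred == y_te).mean())
```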