Results found: 53

Search results
Searched in keywords: support vector machine

Outlier detection in EEG signals
EN
This paper addresses the detection of outliers in EEG signals, which facilitates decisions about a patient's diagnosis based on this examination. We used two methods to detect outliers: the support vector machine and the k-nearest-neighbours method. The experiments were performed on a publicly available dataset containing EEG test results for 500 patients. The results showed that the methods we used achieve an outlier detection efficiency of 93%.
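As an illustrative sketch only (not the paper's code, and with synthetic stand-in feature vectors rather than real EEG data), the two detector families named in the abstract could be combined in scikit-learn like this:

```python
# Hypothetical sketch: outlier detection on synthetic "EEG-like" feature
# vectors with (1) a One-Class SVM and (2) a k-nearest-neighbours
# distance rule. Dataset, thresholds, and parameters are assumptions.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(200, 8))    # inlier feature vectors
outliers = rng.normal(6.0, 1.0, size=(10, 8))   # strongly shifted outliers
X = np.vstack([normal, outliers])

# Method 1: One-Class SVM (-1 = outlier, +1 = inlier).
svm_pred = OneClassSVM(nu=0.05, gamma="scale").fit(normal).predict(X)

# Method 2: flag points whose mean distance to k nearest normal
# neighbours exceeds a threshold (the 3.0 cut-off is arbitrary).
k = 5
dist, _ = NearestNeighbors(n_neighbors=k).fit(normal).kneighbors(X)
knn_pred = np.where(dist.mean(axis=1) > 3.0, -1, 1)

print(svm_pred[-10:], knn_pred[-10:])   # predictions for the 10 outliers
```

Both detectors mark the shifted points as outliers; in practice the threshold and `nu` would be tuned on labelled validation data.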
EN
Sorting coal and gangue is important in raw coal production, and accurately identifying coal and gangue is a prerequisite for effectively separating them. Methods that use image grayscale information can identify coal and gangue, but the recognition rate of sorting based on grayscale information alone must be substantially higher to meet production requirements. A sorting method using object surface grayscale and gloss characteristics is proposed to improve the recognition rate of coal and gangue. Bituminous coal from the Huainan area was used as the experimental object in a series of comparative experiments. It was found that the number of pixel points corresponding to the highest grey level of the grayscale moment and of the illumination component of the coal and gangue images, combined into a single discriminant value and used as input to a GWO-SVM classification model, gave the best classification of coal and gangue, with a recognition rate of up to 98.14%. This method sorts coal and gangue by combining surface greyness and glossiness features, optimizes the traditional greyness-based recognition method, improves the recognition rate, makes the model generalizable, enriches research on coal and gangue recognition, and has theoretical and practical significance for production operations.
EN
This study focuses on the problem of mapping impervious surfaces in urban areas and aims to use remote sensing data and orthophotos to accurately classify and map these surfaces. Impervious surface indices and green space assessments are widely used in land use and urban planning to evaluate the urban environment. Local governments also rely on impervious surface mapping to calculate stormwater fees and effectively manage stormwater runoff. However, accurately determining the size of impervious surfaces is a significant challenge. This study proposes the use of the Support Vector Machines (SVM) method, a pattern recognition approach that is increasingly used in solving engineering problems, to classify impervious surfaces. The research results demonstrate the effectiveness of the SVM method in accurately estimating impervious surfaces, as evidenced by a high overall accuracy of over 90% (indicated by the Cohen’s Kappa coefficient). A case study of the “Parkowo-Leśne” housing estate in Warsaw, which covers an area of 200,000 m², shows the successful application of the method. In practice, the remote sensing imagery and SVM method allowed accurate calculation of the area of the surface classes studied. The permeable surface represented about 67.4% of the total complex and the impervious surface corresponded to the remaining 32.6%. These results have implications for stormwater management, pollutant control, flood control, emergency management, and the establishment of stormwater fees for individual properties. The use of remote sensing data and the SVM method provides a valuable approach for mapping impervious surfaces and improving urban land use management.
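The accuracy figures quoted above (overall accuracy and Cohen's kappa) can be sketched as follows; the reference and predicted class labels below are toy values, not the study's data:

```python
# Hedged sketch: overall accuracy and Cohen's kappa for a two-class
# (pervious "per" / impervious "imp") map. Labels are illustrative only.
from sklearn.metrics import accuracy_score, cohen_kappa_score

reference = ["imp", "imp", "per", "per", "per", "imp", "per", "per"]
predicted = ["imp", "imp", "per", "per", "imp", "imp", "per", "per"]

print(accuracy_score(reference, predicted))      # fraction of agreeing samples
print(cohen_kappa_score(reference, predicted))   # agreement beyond chance
```

Kappa corrects the raw agreement for the agreement expected by chance from the class marginals, which is why it is the stricter of the two figures.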
EN
Purpose: In this study, artificial intelligence techniques, namely Artificial Neural Network, Random Forest, and Support Vector Machine, are employed for PM2.5 modelling. The study is carried out in Rohtak city, India, during the paddy stubble burning months of October and November. The models are compared to check their respective efficacies, and a sensitivity analysis is performed to identify the most vital parameter in PM2.5 modelling. Design/methodology/approach: Air pollution data for the months of October and November from 2016 to 2020 were collected for the study. These months were chosen because paddy stubble burning and major festivities using fireworks occur during them. Untoward data entries, viz. zero values and blank data, were eliminated from the gathered data set, leaving 231 observations of each parameter for the study. The models (ANN, RF, SVM) had PM2.5 as the output variable, while relative humidity, sulfur dioxide, nitrogen dioxide, nitric oxide, carbon monoxide, ozone, temperature, solar radiation, wind direction, and wind speed acted as input variables. The prototypes created from the training data set were verified on the testing data set. A sensitivity analysis was also done to quantify the impact of the various parameters on the output variable, PM2.5. Findings: The SVM_RBF-based model performed best in terms of the coefficient of determination, root mean square error, and mean absolute error. In the sensitivity test, sulphur dioxide (SO2) was adjudged the most vital variable. Research limitations/implications: The quantification capacity of the generated models may not hold beyond the used data set of observations.
Practical implications: The artificial intelligence techniques provide precise estimation and forecasting of PM2.5 in the air during the paddy stubble burning months of October and November. Originality/value: Unlike past research that focuses on modelling various air pollution parameters, this study focuses specifically on modelling the most vital air pollutant, PM2.5, during the paddy stubble burning months of October and November, when air pollution is at its peak in northern India.
EN
Standard time is a key indicator of the production efficiency of the sewing department, and it plays a vital role in production forecasting for the apparel industry. In this article, grey correlation analysis was adopted to identify seven main influencing factors for the determination of standard time in the sewing process: sewing length, stitch density, bending stiffness, fabric weight, production quantity, drape coefficient, and length of service. A novel forecasting model based on the support vector machine (SVM) with particle swarm optimization (PSO) is then proposed to predict the standard time of the sewing process. Based on real data from a clothing company, the proposed forecasting model is verified by evaluating its performance with the squared correlation coefficient (R2) and mean square error (MSE). Using the PSO-SVM method, the R2 and MSE are found to be 0.917 and 0.0211, respectively. In conclusion, the high accuracy of the PSO-SVM method in this experiment indicates that the proposed model is a reliable forecasting tool for the determination of standard time and can achieve good predictive results in the sewing process.
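The evaluation step of such a model can be sketched as below. This is not the paper's pipeline: the data are synthetic stand-ins for the seven factors, and a plain grid search stands in for the particle swarm optimiser of the SVM hyperparameters.

```python
# Hedged sketch: SVM regression of "standard time" from seven factors,
# scored with R2 and MSE. Grid search substitutes for PSO; data are fake.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(300, 7))    # 7 factors (sewing length, etc.)
y = X @ rng.uniform(0.5, 2.0, size=7) + rng.normal(0, 0.05, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
search = GridSearchCV(SVR(kernel="rbf"),
                      {"C": [1, 10, 100], "gamma": [0.1, 1.0]}, cv=3)
search.fit(X_tr, y_tr)
pred = search.predict(X_te)
print(r2_score(y_te, pred), mean_squared_error(y_te, pred))
```

PSO would explore the same (C, gamma) space with a swarm of candidate points instead of a fixed grid; the scoring of each candidate is identical.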
EN
The cost overrun in road construction projects in Iraq is one of the major problems facing the construction of new roads. To enable the concerned government agencies to predict the final cost of roads, this paper develops an early cost-estimating model for road projects using a support vector machine, based on 43 sets of bills of quantities collected in Baghdad, Iraq. As cost estimates are required at the early stages of a project, consideration was given to the fact that the input data for the support vector machine model could be easily extracted from sketches or the project's scope definition. The data were collected from contracts awarded by the Mayoralty of Baghdad for projects completed between 2010 and 2013. Mathematical equations were constructed using the sequential minimal optimization (SMO) algorithm. The created prediction equations achieved an average accuracy (AA) of 99.65% and a coefficient of determination (R2) of 97.63%.
EN
This paper proposes a model quality assessment method based on the Support Vector Machine, which can be used to develop a digital twin. This work is strongly connected with Industry 4.0, whose main idea is to integrate machines, devices, systems, and IT. One of the goals of Industry 4.0 is to introduce flexible assortment changes. Virtual commissioning can be used to create a simulation model of a plant or to conduct training for maintenance engineers. One branch of virtual commissioning is the digital twin: a virtual representation of a plant or a device. Thanks to the digital twin, different scenarios can be analyzed to make the testing process less complicated and less time-consuming. The goal of this work is to propose a coefficient that takes into account expert knowledge and methods used for model quality assessment, such as the Normalized Root Mean Square Error (NRMSE) and the Maximum Error (ME). The NRMSE and ME methods are commonly used for this purpose, but they have not been used simultaneously so far; each of them considers a different aspect of a model. The coefficient allows deciding whether a model can be used for digital twin applications. Such an approach introduces the ability to test models automatically or semi-automatically.
EN
Squirrel cage induction motors suffer from numerous faults, for example cracks in the rotor bars. This paper presents a novel algorithm based on the Least Squares Support Vector Machine (LS-SVM) for detecting partial rotor bar rupture in the squirrel cage asynchronous machine. Spectral analysis of the stator current based on the FFT is applied in order to extract the fault frequencies related to partial rotor bar rupture. Afterwards, the LS-SVM approach is established as a monitoring system to detect the degree of rotor bar rupture. The training and testing data sets are derived from the spectral analysis of one stator phase current, containing information about the characteristic harmonics related to partial rotor bar rupture. Satisfactory and more accurate results are obtained by applying LS-SVM to rotor bar fault diagnosis.
EN
Recently, the analysis of medical imaging has been gaining substantial research interest, due to advancements in the computer vision field. Automation of medical image analysis can significantly improve the diagnosis process and lead to better prioritization of patients waiting for medical consultation. This research is dedicated to building a multi-feature ensemble model which associates two independent methods of image description: textural features and deep learning. Different classification algorithms were applied to single-phase computed tomography images containing 8 subtypes of renal neoplastic lesions. The final ensemble combines a textural description with a support vector machine and various configurations of Convolutional Neural Networks. Experimental tests showed that such a model can achieve a weighted F1-score of 93.6% (tested in 10-fold cross-validation mode), an improvement of 3.5 percentage points over the best individual predictor.
EN
This study offers two Support Vector Machine (SVM) models, for fault detection and fault classification respectively. Different short circuit events were generated using a 154 kV transmission line modeled in MATLAB/Simulink. A Discrete Wavelet Transform (DWT) is applied to the measured single-terminal current signals before the fault detection stage. Three-level wavelet energies obtained for each of the three phase currents were used as input features for the detector. After fault detection, a half cycle (10 ms) of the three-phase current signals was recorded at a 20 kHz sampling rate. The recorded current signals were used as input parameters for the multi-class SVM classifier. The validation tests demonstrated that a quite reliable fault detection and classification system can be developed using SVM. The generated faults were used for training and testing the SVM classifiers. The SVM-based classification and detection model was fully implemented in MATLAB and comprehensively tested under different conditions. The effects of the fault impedance, fault inception angle, mother wavelet, and fault location were investigated. The simulation results verify that the offered approach can be used for fault detection and classification on the transmission line.
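The feature-extraction step, three-level wavelet energies of a current window, can be sketched as below. This is an assumption-laden illustration: a hand-rolled Haar transform stands in for the paper's mother wavelet, and the test signal is a toy 50 Hz current with one injected harmonic.

```python
# Hedged sketch: three-level DWT detail energies of a 10 ms current
# window sampled at 20 kHz (as in the paper). Haar wavelet and the toy
# signal are assumptions; any mother wavelet could be substituted.
import numpy as np

def haar_step(x):
    # One level of the orthonormal Haar DWT: approximation and detail.
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

fs = 20_000                                   # 20 kHz sampling rate
t = np.arange(0, 0.01, 1 / fs)                # 10 ms window (200 samples)
current = np.sin(2 * np.pi * 50 * t) + 0.2 * np.sin(2 * np.pi * 1500 * t)

energies, a = [], current
for _ in range(3):                            # three decomposition levels
    a, d = haar_step(a)
    energies.append(float(np.sum(d ** 2)))    # energy of each detail band
print(energies)                               # three features per phase
```

Because the Haar transform is orthonormal, the detail energies plus the final approximation energy sum exactly to the signal energy, which makes the three values a compact, lossy-but-consistent description to feed the SVM detector.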
EN
In this paper, support vector machines (SVMs), least squares SVMs (LSSVMs), relevance vector machines (RVMs), and probabilistic classification vector machines (PCVMs), are compared on sixteen binary and multiclass medical datasets. Particular emphasis is put on the comparison among the commonly used Gaussian radial basis function (GRBF) kernel, and the relatively new generalized min–max (GMM) kernel and exponentiated-GMM (eGMM) kernel. Since most medical decisions involve uncertainty, a postprocessing approach based on Platt’s method and pairwise coupling is employed to produce probabilistic outputs for prediction uncertainty assessment. The extensive empirical study illustrates that the SVM classifier using the tuning-free GMM kernel (SVM-GMM) shows good usability and broad applicability, and exhibits competitive performance against some state-of-the-art methods. These results indicate that SVM-GMM can be used as the first-choice method when selecting an appropriate kernel-based vector machine for medical diagnosis. As an illustration, SVM-GMM efficiently achieves a high accuracy of 98.92% on the thyroid disease dataset consisting of 7200 samples.
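The tuning-free GMM kernel can be plugged into a standard SVM as a callable kernel. The sketch below follows the usual definition of the generalized min-max kernel (each feature split into positive and negative parts, then sum-of-mins over sum-of-maxes); the toy data are synthetic, not a medical dataset.

```python
# Hedged sketch: SVC with a custom generalized min-max (GMM) kernel.
# Definition per the standard GMM construction; data are synthetic.
import numpy as np
from sklearn.svm import SVC

def gmm_kernel(X, Y):
    # Split each feature x into (max(x,0), max(-x,0)) so all entries are
    # non-negative, then K(u, v) = sum(min(u, v)) / sum(max(u, v)).
    def expand(A):
        return np.hstack([np.maximum(A, 0), np.maximum(-A, 0)])
    U, V = expand(X), expand(Y)
    K = np.empty((U.shape[0], V.shape[0]))
    for i, u in enumerate(U):
        K[i] = np.minimum(u, V).sum(axis=1) / np.maximum(u, V).sum(axis=1)
    return K

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-1, 0.3, (40, 4)), rng.normal(1, 0.3, (40, 4))])
y = np.array([0] * 40 + [1] * 40)

clf = SVC(kernel=gmm_kernel).fit(X, y)
print(clf.score(X, y))
```

Note that the kernel has no bandwidth parameter to tune, which is exactly the "tuning-free" property the abstract highlights; only the SVM's C remains.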
EN
The present research applies six empirical, three statistical, and two soft computing methods to predict water saturation of an oil reservoir. The employed empirical models are ‘Archie (Trans AIME 146(1):54–62, 1942),’ ‘DeWitte (Oil Gas J 49(16):120–134, 1950),’ ‘Poupon et al. (J Petrol Technol 6(6):27–34, 1954),’ ‘Simandoux (Revue de l’Institut Français du Pétrole, 1963),’ ‘Poupon and Leveaux (1971),’ and ‘Schlumberger (Log interpretation principles/applications, p. 235, 7th printing. Houston, 1998)’; the statistical methods are ‘multiple variable regression,’ ‘fine tree, medium tree, coarse tree-based regression tree,’ and ‘bagged tree, boosted tree-based tree ensembles’; and the soft computing techniques are ‘support vector machine (SVM)’ and ‘Levenberg–Marquardt (LM), Bayesian regularization (BR), and scaled conjugate gradient (SCG)-based artificial neural network (ANN).’ In addition, log variables are ranked based on their significance in water saturation modeling. To achieve the goals, 521 data points are selected from three wells. Each data point has laboratory-derived core water saturation information and six well log features, such as gamma ray (GR), bulk density (RHOB), sonic travel time (DT), true resistivity (LLD), neutron porosity (φN), and depth. Statistical indexes, namely regression coefficient, mean squared error, root mean squared error, average absolute percentage error, minimum absolute error percentage, and maximum absolute error percentage, are used to compare the prediction efficiency of the study methods. Results show that the empirical models provide exceedingly poor prediction efficiency. Within the study models, fine tree, medium tree-based regression tree; bagged tree, boosted tree-based tree ensembles; fine Gaussian SVM; ANN with LM; and ANN with BR are very efficient predictive strategies. The log ranking reveals that GR and DT are the most important, whereas RHOB and φN are the least vital predictor variables in water saturation prediction.
EN
In this paper, we compare the following machine learning methods as classifiers for sentiment analysis: k-nearest neighbours (kNN), artificial neural network (ANN), support vector machine (SVM), and random forest. We used a dataset containing 5,000 movie reviews, of which 2,500 were marked as positive and 2,500 as negative. We chose 5,189 words which have an influence on sentence sentiment. The dataset was prepared using a term-document matrix (TDM) and classical multidimensional scaling (MDS). This is the first time that TDM and MDS have been used to choose the characteristics of text in sentiment analysis. We also examined different settings of each classifier, such as the kernel type for SVM and the neighbour count in kNN. All calculations were performed in the R language (RStudio, R version 3.5.2). Our work can be reproduced because all of our data sets and source code are public.
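The paper's pipeline is implemented in R; the preparation step it describes (term-document matrix followed by classical MDS) can be sketched in Python as below. The four-document corpus and the two-dimensional embedding are illustrative assumptions, not the study's data.

```python
# Hedged sketch: TDM + classical MDS on a toy corpus. Classical MDS is
# implemented directly via the double-centred squared-distance matrix.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

docs = ["good great movie", "great acting good plot",
        "bad boring movie", "boring plot bad acting"]
tdm = CountVectorizer().fit_transform(docs).toarray().astype(float)

# Squared Euclidean distances between document vectors.
D2 = ((tdm[:, None, :] - tdm[None, :, :]) ** 2).sum(axis=-1)

# Classical MDS: B = -1/2 * J D2 J, then eigendecompose.
n = len(docs)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D2 @ J
vals, vecs = np.linalg.eigh(B)                  # eigenvalues ascending
coords = vecs[:, -2:] * np.sqrt(np.maximum(vals[-2:], 0.0))
print(coords)                                   # 2-D document coordinates
```

The low-dimensional coordinates (rather than the raw sparse TDM) are then what a kNN, SVM, or random forest classifier would be trained on.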
EN
This article presents the process of adapting support vector machine model’s parameters used for studying the effect of traffic light cycle length parameter’s value on traffic quality. The survey is carried out using data collected during running simulations in author’s traffic simulator. The article shows results of searching for optimum traffic light cycle length parameter’s value.
EN
The paper presents the idea of connecting the concepts of Vapnik’s support vector machine with Pawlak’s rough sets in one classification scheme. The hybrid system will be applied to classifying data in the form of intervals and with missing values [1]. Both situations will be treated as a cause of dividing the input space into equivalence classes. Then, the SVM procedure will lead to a classification of input data into rough sets of the desired classes, i.e. to their positive, boundary or negative regions. Such a form of answer is also called a three-way decision. The proposed solution will be tested using several popular benchmarks.
EN
Particulate matters (PMs) are considered one of the air pollutants generally associated with poor air quality in both outdoor and indoor environments. The composition, distribution and size of these particles hazardously affect human health, causing cardiovascular problems, lung dysfunction, respiratory problems, chronic obstructive pulmonary disease and lung cancer. Classification models developed by analyzing mass concentration time series data of atmospheric particulate matter can be used for the prediction of air quality and for issuing warnings to protect public health. In this study, mass concentration time series data of both outdoor and indoor particulate matter, PM2.5 (aerodynamic size up to 2.5 μm) and PM10.0 (aerodynamic size up to 10.0 μm), were acquired using a Haz-Dust EPAM-5000 from six different locations in the city of Muzaffarabad, Azad Kashmir. Linear and nonlinear approaches were used to extract mass concentration time series features of the indoor and outdoor atmospheric particulates. These features were given as input to robust machine learning classifiers: support vector machine (SVM) kernels, ensemble classifiers, decision tree and K-nearest neighbours (KNN). The performance was estimated in terms of area under the curve (AUC), accuracy, true negative rate, true positive rate, negative predictive value and positive predictive value. The highest accuracy (95.8%) was obtained using cubic and coarse Gaussian SVM along with cosine and cubic KNN, while the highest AUC, i.e., 1.00, was obtained using fine Gaussian and cubic SVM as well as cubic and weighted KNN.
EN
Precise estimation of river flow in catchment areas plays a significant role in managing water resources and, particularly, in making firm decisions during flood and drought crises. In recent years, different procedures have been proposed for estimating river flow, among which hybrid artificial intelligence models have garnered notable attention. This study proposes a hybrid method, the so-called support vector machine–artificial flora (SVM-AF), and compares the obtained results with the outcomes of wavelet support vector machine models and the Bayesian support vector machine. To estimate the discharge of the Dez river basin in the southwest of Iran, daily flow data recorded by hydrometric stations located upstream of the dam over the years 2008–2018 were investigated. Four performance criteria, the coefficient of determination (R2), root-mean-square error (RMSE), mean absolute error (MAE), and Nash–Sutcliffe efficiency (NS), were employed to evaluate and compare the performance of the models. Comparison of the models based on the evaluation criteria and Taylor’s diagram showed that the proposed hybrid SVM-AF, with R2 = 0.933–0.985, RMSE = 0.008–0.088 m3/s, MAE = 0.004–0.040 m3/s, and NS = 0.951–0.995, had the best performance in estimating the daily flow of the river. The estimation results showed that the proposed hybrid SVM-AF model outperformed the other models in efficiently predicting flow and daily discharge.
EN
The purpose of the work was to predict selected product parameters of the dry separation process using a pneumatic sorter. From the perspective of using coal for energy purposes, determination of process parameters of the output, such as ash content, moisture content, sulfur content and calorific value, is essential. Prediction was carried out using chosen machine learning algorithms that have proved effective in forecasting the output of various technological processes in which the relationships between process parameters are non-linear. The data used in the work came from dry separation experiments on coal samples. Multiple linear regression was used as the baseline predictive technique; the results showed that for predicting moisture and sulfur content this technique was sufficient. More complex machine learning algorithms, the support vector machine (SVM) and the multilayer perceptron neural network (MLP), were used and analyzed for ash content and calorific value. In addition, the k-means clustering technique was applied; the role of cluster analysis was to obtain additional information about the coal samples used as feed material. Combining techniques such as the multilayer perceptron neural network (MLP) or support vector machine (SVM) with k-means allowed the development of a hybrid algorithm. This approach significantly increased the effectiveness of the predictive models and proved to be a useful tool in modeling the coal enrichment process.
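One common way to hybridize k-means with a supervised model, appending the cluster label of each sample as an extra input feature, can be sketched as below. This is an assumed interpretation with synthetic stand-in data; the paper's coal-sample variables and exact coupling are not reproduced.

```python
# Hedged sketch of a k-means + SVM hybrid: cluster each sample, then
# feed the cluster id as an additional feature to an SVM regressor.
# Data and target are synthetic stand-ins (e.g. for ash content).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
X = rng.uniform(size=(200, 5))                     # feed-material features
y = 2 * X[:, 0] + np.sin(3 * X[:, 1]) + rng.normal(0, 0.05, 200)

labels = KMeans(n_clusters=3, n_init=10, random_state=3).fit_predict(X)
X_hybrid = np.hstack([X, labels[:, None]])         # cluster id as a feature

model = make_pipeline(StandardScaler(), SVR(C=10)).fit(X_hybrid, y)
print(model.score(X_hybrid, y))                    # training R^2
```

The cluster label gives the regressor a coarse grouping of the feed material that the raw features alone may not expose directly.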
EN
The useful life time of equipment is an important variable related to system prognosis, and its accurate estimation leads to several competitive advantages in industry. In this paper, Remaining Useful Lifetime (RUL) prediction is estimated by Particle Swarm optimized Support Vector Machines (PSO+SVM), considering two possible pre-processing techniques to improve input quality: Empirical Mode Decomposition (EMD) and Wavelet Transforms (WT). Here, EMD and WT coupled with SVM are used to predict the RUL of bearings from the IEEE PHM Challenge 2012 big dataset. Specifically, two cases were analyzed: the complete vibration dataset and a truncated vibration dataset. Finally, predictions from models applying both pre-processing techniques are compared against results obtained from PSO+SVM without any pre-processing. In conclusion, EMD+SVM produced the most accurate predictions and outperformed the other models.
EN
Diabetes mellitus (DM) is one of the most widespread and rapidly growing diseases. With its advancement, DM-related complications are also increasing. We used characteristic features of the toe photoplethysmogram (PPG) for the detection of type-2 DM using a support vector machine (SVM). We collected toe PPG signals from 58 healthy and 83 type-2 DM subjects. From each PPG signal, 37 different features were extracted for classification. To improve the performance of the SVM and reduce noisy data, we employed a hybrid feature selection technique that reduces the feature set from 37 to 10 on the basis of majority voting. Using the 10 selected features, we obtained an accuracy of 97.87%, a sensitivity of 98.78% and a specificity of 96.61%. Further validation of our method on a random population test is needed before it can be used as a non-invasive screening tool. Photoplethysmography is an economical, technically easy and completely non-invasive method for both physician and subject. With the high accuracy obtained, we hope that our work will help clinicians in the screening of diabetes and in adopting a suitable treatment plan to prevent end-organ damage.