Search results — keyword: "support vector machine" (60 results found; first page of 3 shown below)
EN
This paper presents a study on the use of Principal Component Analysis (PCA) combined with Support Vector Machines (SVM) for the classification of airborne objects based on kinematic parameters. Synthetic datasets representing different aerial objects, such as airplanes, drones, birds, and balloons, were generated using statistical distributions of flight features, including average height, velocity, acceleration, and trajectory length. PCA was applied to reduce dimensionality and visualize data separability, while SVM was employed as a supervised learning classifier in the reduced feature space. The results show that the PCA-SVM combination enables effective classification even when class distributions partially overlap. The method demonstrates potential for practical implementation in radar-based or sensor fusion systems for aerial object identification.
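As an illustration of the pipeline this abstract describes, a PCA-to-SVM classifier can be sketched with scikit-learn on synthetic kinematic features; the class means and spreads below are invented stand-ins, not the paper's data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical kinematic features per track: mean altitude [m], speed [m/s],
# acceleration [m/s^2], trajectory length [km] -- illustrative values only.
aircraft = rng.normal([9000, 220, 1.0, 400], [1500, 30, 0.3, 80], size=(200, 4))
drones = rng.normal([120, 15, 2.5, 5], [40, 5, 0.8, 2], size=(200, 4))
X = np.vstack([aircraft, drones])
y = np.array([0] * 200 + [1] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# Standardize, project onto two principal components, then fit an RBF SVM.
model = make_pipeline(StandardScaler(), PCA(n_components=2), SVC(kernel="rbf"))
model.fit(X_tr, y_tr)
acc = model.score(X_te, y_te)
```

On such well-separated synthetic classes the pipeline classifies nearly perfectly; real radar tracks with partially overlapping classes, as in the paper, would need the full feature set and tuned hyperparameters.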
EN
The study explored the performance of vowel recognition using an acoustic model built on Audio Fingerprint techniques [1]. The research compares the performance of Support Vector Machines (SVMs), Hidden Markov Models (HMMs), Artificial Neural Networks (ANNs), and k-Nearest Neighbours (k-NN) classifiers in the recognition of isolated and within-word vowels, and investigates the importance of different types of acoustic speech features in this process. Temporal, spectral, cepstral, formant, LPC, and perceptual features of speech were examined. The importance of features was assessed using a random forest classifier. Vowel classification was tested at three confidence levels for feature importance: 90%, 95%, and 99%. Two databases created by the authors, consisting of a total of 1,200 samples from 20 speakers recorded under household conditions, were used. The classifiers were evaluated by confusion matrix, accuracy, precision, sensitivity, and F1 score. Segmentation of words into speech sounds was carried out using a tool based on BiLSTM recurrent neural networks and the BIC criterion. The three most important features were power spectral density, spectral cut-off, and Power-Normalised Cepstral Coefficients. In the isolated-vowel recognition task, the SVM classifier was the most effective at a feature-importance confidence level of 95%, obtaining accuracy = 81%, precision = 81%, sensitivity = 81%, and F1 score = 80%. In the task of recognising a vowel within a word, it was verified whether the algorithm detected the presence of a vowel in the correct segment and whether it recognised the correct vowel within it. The best results were obtained by the k-NN classifier (feature-importance confidence level of 99.9%). These results were low, however; correct recognition of the vowel in the word was 20% for A, E, and U, 7% for I and O, and 23% for Y. This indicates a strong influence of neighbouring speech sounds on the acoustic model of vowels and their recognition.
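The feature-ranking step described here (random forest importances feeding an SVM) can be illustrated on anonymous synthetic features; the cumulative-importance cut-off below is an assumed reading of the paper's 95% confidence level, not its exact procedure:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for acoustic feature vectors (the paper uses temporal,
# spectral, cepstral, formant, LPC, and perceptual features).
X, y = make_classification(n_samples=600, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Rank features with a random forest and keep those that together account
# for 95% of the total importance.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
order = np.argsort(rf.feature_importances_)[::-1]
cumulative = np.cumsum(rf.feature_importances_[order])
keep = order[: np.searchsorted(cumulative, 0.95) + 1]

# Train the SVM only on the retained feature columns.
svm = SVC(kernel="rbf").fit(X_tr[:, keep], y_tr)
acc = svm.score(X_te[:, keep], y_te)
```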
Advanced AI tools for predicting mechanical properties of self-compacting concrete
EN
The present study utilizes advanced artificial intelligence (AI) techniques, including Support Vector Machines (SVM), Artificial Neural Networks (ANN), Adaptive Neuro-Fuzzy Inference Systems with Genetic Algorithms (ANFIS-GA), Gene Expression Programming (GEP), and Multiple Linear Regression (MLR), to develop and compare predictive models for the determination of compressive and tensile strength. Partial mutual information was used to select variables and establish their degree of association, aiding the results obtained through the predictive models. Among the modeling techniques, the results obtained for compressive strength through the SVM technique were excellent, producing an Index of Agreement of 0.96, an Akaike Information Criterion of 68.33, a skill score of 0.96, and a symmetric uncertainty of 0.93, indicating a simple, robust, and low-uncertainty predictive model. Furthermore, MLR was found to predict tensile strength characteristics better, with the MLR model demonstrating a higher R2 value of 0.81, implying a reliable tensile strength prediction model. However, SVM consistently performed well for both compressive and tensile strength, endorsing the reliability of the predictive model. Overall, the study provides new insights into improving the strength properties of self-compacting concrete (SCC) and its evaluation through predictive techniques.
EN
Reference evapotranspiration (ETo) is a critical parameter in water resource management, including irrigation scheduling and the estimation of crop water requirements. Because large uncertainties in estimating ETo can result in equally large uncertainties in water budgets and crop water requirements, accurate determination of ETo is challenging when direct measurement, or estimation with the semi-empirical Penman-Monteith equation (FAO-56-PM) of the Food and Agriculture Organization (FAO), is not possible. This study explores the use of the support vector regression (SVR) machine learning algorithm to predict daily ETo with limited measured inputs. It is the first time that the Julian day (J) is included as an input to improve prediction accuracy. Ten years of meteorological data collected at the Dar-El-Beidha weather station in Algeria are used, with maximum, minimum, and mean air temperatures (TM, tm, and T), mean relative humidity (RH), mean wind speed (u2), and sunshine duration (n) as inputs, J and extraterrestrial solar radiation (Ra) as auxiliary variables, and the ETo-FAO-56-PM values as target outputs. Several SVR models are developed using different combinations of inputs, and their performance is assessed relative to ETo-FAO-56-PM values. Empirical equations are also used for comparison, and several evaluation metrics are employed, including root mean square error (RMSE), mean absolute percentage error (MAPE), coefficient of determination (R2), RMSE-standard deviation ratio (RSR), Nash-Sutcliffe efficiency coefficient (NSE), and Willmott's refined index (WI). The results show that the SVR models utilizing limited meteorological inputs in addition to J and/or Ra predicted ETo accurately and outperformed the corresponding estimates from empirical equations, radial basis function neural networks (RBFNN), and adaptive neuro-fuzzy inference system (ANFIS) models obtained in previous studies.
The RMSE ranged from 0.28 to 0.72 mm/day, R2 from 0.86 to 0.98, MAPE from 7 to 19%, RSR from 0.15 to 0.38, NSE from 0.86 to 0.98, and WI from 0.65 to 0.87. These findings could provide useful solutions for ETo estimation issues in areas with sparse data and agro-climatic conditions similar to those of Dar-El-Beidha.
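A limited-input SVR of the kind the abstract describes can be sketched as follows; the seasonal temperature/humidity series and the linear proxy target below are invented for illustration and are NOT the FAO-56-PM equation:

```python
import numpy as np
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
n = 365
doy = np.arange(1, n + 1)                                    # Julian day J
tmax = 25 + 10 * np.sin(2 * np.pi * doy / 365) + rng.normal(0, 1, n)
rh = 50 + 15 * np.cos(2 * np.pi * doy / 365) + rng.normal(0, 3, n)
# Synthetic proxy target in mm/day (assumed linear form, for illustration).
eto = 0.15 * tmax - 0.02 * rh + 2 + rng.normal(0, 0.2, n)

X = np.column_stack([tmax, rh, doy])                         # limited inputs + J
X_tr, X_te, y_tr, y_te = train_test_split(X, eto, random_state=1)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
rmse = float(np.sqrt(np.mean((y_te - pred) ** 2)))
r2 = r2_score(y_te, pred)
```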
EN
This study proposes a framework to develop a high-resolution snow cover area (SCA) product from freely available spaceborne remote sensing data, utilizing Sentinel-1 multi-temporal products and MODIS surface reflectance data. The proposed methodology exploits the sensitivity to snow of parameters retrievable from the Sentinel-1 datasets. Parameters such as the dual-polarimetric entropy, mean scattering angle, backscatter coefficients, and interferometric coherence are integrated with a spatially resampled normalized difference snow index (NDSI) from MODIS data to estimate an equivalent NDSI, which is used to determine the SCA at 15 m spatial resolution. The equivalent NDSI is derived using machine learning regression based on support vector machines (SVMs) and the multilayer perceptron (MLP). The experiments are performed for the high-elevation regions of the Kunduz and Khanabad watersheds of the northern Hindu Kush mountains for the peak winter and early melt season of 2019 (February and March). The reference SCA for evaluating the results is generated by thresholding the NDSI derived from pan-sharpened Landsat-8 imagery. Compared to MLP, the SCA generated by the SVM regression showed better performance. Further, compared to the spatially resampled MODIS NDSI, both the SVM and MLP results showed better accuracy for snow classification, with mean conditional kappa coefficients of 0.75 and 0.83, respectively, against 0.62.
Outlier detection in EEG signals
EN
In this paper, we address the detection of outliers in EEG signals, which supports diagnostic decision-making based on this examination. We used two methods to detect outliers: the support vector machine and the k-nearest-neighbours method. The experiments were performed on a publicly available dataset containing EEG test results for 500 patients. The obtained results showed that the methods we used achieve an outlier-detection accuracy of 93%.
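The abstract does not specify the detectors in implementation detail; a minimal sketch with a one-class SVM on synthetic "EEG-like" feature vectors (the 5% contamination rate and the feature dimensionality are assumptions) could look like this:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)
# Synthetic stand-in for EEG feature vectors: 475 typical recordings plus
# 25 gross outliers (the 5% contamination rate is an assumption).
inliers = rng.normal(0, 1, size=(475, 8))
outliers = rng.normal(0, 8, size=(25, 8))
X = np.vstack([inliers, outliers])
labels = np.array([1] * 475 + [-1] * 25)       # +1 = normal, -1 = outlier

# nu upper-bounds the fraction of training points flagged as outliers.
oc = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X)
pred = oc.predict(X)                           # +1 inlier, -1 outlier
detection_rate = float((pred == labels).mean())
```

A k-NN-based detector, the paper's second method, would instead score each point by its distance to its k-th nearest neighbour and threshold that score.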
EN
Radar Target Detection (RTD) is a critical aspect of modern radar systems, which have widespread use in both civil and military fields. However, detecting targets in clutter and unfavorable conditions is challenging with conventional signal processing approaches such as Constant False Alarm Rate (CFAR) detection. The harsh and complex environments encountered in radar measurements make the target detection problem even more challenging for traditional methods, so developing a reliable and robust RTD technique is crucial. This paper proposes an approach that combines Machine Learning (ML) with conventional methods to detect, separate, and classify real targets from noisy backgrounds in a real radar dataset: Fuzzy C-means (FCM) clustering segments the Range-Doppler Map (RDM) image into targets and background, features are extracted with the gray-level co-occurrence matrix (GLCM), and the targets are classified with a support vector machine (SVM). The approach is based on an augmented Doppler Filter Bank (DFB) with RDM images and has been tested on a Frequency Modulated Continuous Wave (FMCW) radar mounted on an Unmanned Aerial Vehicle (UAV) for detecting ground targets. A flight was conducted in a challenging environment to evaluate the proposed system's performance. The experimental results demonstrate that the proposed approach outperforms existing methods in terms of classification accuracy. The approach is also computationally efficient, can be easily implemented in real-time systems, and has great potential for improving RTD performance in various applications.
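The GLCM-feature-plus-SVM stage of such a pipeline can be sketched with a small NumPy GLCM (scikit-image's `graycomatrix` would normally be used); the smooth/rough synthetic patches below merely stand in for background and target segments of an RDM image:

```python
import numpy as np
from sklearn.svm import SVC

def glcm_features(img, levels=8):
    """GLCM for horizontal offset 1 -> (contrast, energy) texture features."""
    q = (img * (levels - 1)).astype(int)             # quantize to gray levels
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    glcm /= glcm.sum()                               # normalize to probabilities
    i, j = np.indices(glcm.shape)
    contrast = ((i - j) ** 2 * glcm).sum()
    energy = (glcm ** 2).sum()
    return [contrast, energy]

rng = np.random.default_rng(3)
# Hypothetical patches: smooth "background" vs high-variance "target" texture.
smooth = [rng.uniform(0.4, 0.6, (16, 16)) for _ in range(50)]
rough = [rng.uniform(0.0, 1.0, (16, 16)) for _ in range(50)]
X = np.array([glcm_features(p) for p in smooth + rough])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="rbf").fit(X[::2], y[::2])          # even rows: train
acc = clf.score(X[1::2], y[1::2])                    # odd rows: test
```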
EN
Sorting coal and gangue is important in raw coal production; accurately identifying coal and gangue is a prerequisite for effectively separating them. Methods based on image grayscale information can identify coal and gangue, but the recognition rate of a sorting process based on grayscale information alone needs to be substantially higher to meet production requirements. A sorting method using object-surface grayscale and gloss characteristics is proposed to improve the recognition rate of coal and gangue. In comparative experiments, bituminous coal from the Huainan area was used as the experimental object. The numbers of pixel points corresponding to the highest-level grey value of the grayscale moment and of the illumination component of the coal and gangue images were combined into a total discriminant value and used as input for classification with the GWO-SVM model. The recognition rate reached up to 98.14%. This method, which sorts coal and gangue by combining surface greyness and glossiness features, optimizes the traditional greyness-based recognition method, improves the recognition rate, makes the model generalizable, enriches research on coal and gangue recognition, and has theoretical and practical significance in production operations.
EN
This study focuses on the problem of mapping impervious surfaces in urban areas and aims to use remote sensing data and orthophotos to accurately classify and map these surfaces. Impervious surface indices and green space assessments are widely used in land use and urban planning to evaluate the urban environment. Local governments also rely on impervious surface mapping to calculate stormwater fees and effectively manage stormwater runoff. However, accurately determining the extent of impervious surfaces is a significant challenge. This study proposes the Support Vector Machines (SVM) method, a pattern recognition approach increasingly used in solving engineering problems, to classify impervious surfaces. The results demonstrate the effectiveness of the SVM method in accurately estimating impervious surfaces, as evidenced by an overall accuracy of over 90% and a correspondingly high Cohen's Kappa coefficient. A case study of the "Parkowo-Leśne" housing estate in Warsaw, which covers an area of 200,000 m², shows the successful application of the method. In practice, the remote sensing imagery and the SVM method allowed accurate calculation of the area of the studied surface classes: permeable surfaces represented about 67.4% of the complex and impervious surfaces the remaining 32.6%. These results have implications for stormwater management, pollutant control, flood control, emergency management, and the establishment of stormwater fees for individual properties. The use of remote sensing data and the SVM method provides a valuable approach for mapping impervious surfaces and improving urban land use management.
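Evaluating such a land-cover classifier with both overall accuracy and Cohen's kappa can be sketched as below; the synthetic per-pixel features are invented, and only the roughly 67/33 permeable/impervious split mirrors the case study:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for per-pixel spectral features from orthophotos;
# class 0 = permeable, class 1 = impervious (proportions as in the study).
X, y = make_classification(n_samples=1000, n_features=6, n_informative=3,
                           weights=[0.67, 0.33], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf").fit(X_tr, y_tr)
pred = clf.predict(X_te)
acc = accuracy_score(y_te, pred)        # raw agreement
kappa = cohen_kappa_score(y_te, pred)   # agreement corrected for chance
```

Kappa is the more conservative of the two: with an imbalanced class split, a classifier can reach high raw accuracy by favouring the majority class, which kappa penalizes.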
EN
Hearing is one of the most crucial human senses. It allows people to connect with their environment, with the people they meet, and with the knowledge they need to live their lives to the fullest. Hearing loss can have a detrimental impact on a person's quality of life in a variety of ways, ranging from fewer educational and job opportunities due to impaired communication to social withdrawal in severe cases. Early diagnosis and treatment can prevent most hearing loss. Pure tone audiometry, which measures air and bone conduction hearing thresholds at various frequencies, is widely used to assess hearing loss. A shortage of audiologists may delay diagnosis, since an audiologist must analyze an audiogram, the graphic representation of pure tone audiometry test results, to determine the hearing loss type and treatment. In the presented work, several AI-based models were used to classify audiograms into three types of hearing loss: mixed, conductive, and sensorineural. These models included Logistic Regression, Support Vector Machines, Stochastic Gradient Descent, Decision Trees, Random Forest, a Feedforward Neural Network (FNN), a Convolutional Neural Network (CNN), a Graph Neural Network (GNN), and a Recurrent Neural Network (RNN). The models were trained on 4007 audiograms classified by experienced audiologists. The RNN architecture achieved the best classification performance, with an out-of-training accuracy of 94.46%. Further research will focus on enlarging the dataset and enhancing the accuracy of the RNN models.
EN
Purpose: In this study, artificial intelligence techniques, namely Artificial Neural Network, Random Forest, and Support Vector Machine, are employed for PM 2.5 modelling. The study is carried out in Rohtak city, India, during the paddy-stubble-burning months of October and November. The models are compared to check their respective efficacies, and a sensitivity analysis is performed to identify the most influential parameter in PM 2.5 modelling. Design/methodology/approach: Air pollution data for October and November from 2016 to 2020 were collected for the study. These months were chosen because paddy stubble burning and major festivities using fireworks occur during them. Anomalous entries (zero values, blank data, etc.) were eliminated from the gathered data set, leaving 231 observations of each parameter for the study. The models (ANN, RF, SVM) had PM 2.5 as the output variable, while relative humidity, sulfur dioxide, nitrogen dioxide, nitric oxide, carbon monoxide, ozone, temperature, solar radiation, wind direction, and wind speed acted as input variables. The models built on the training data set were verified on the testing data set. A sensitivity analysis was also done to quantify the impact of the input parameters on the output variable, PM 2.5. Findings: The SVM_RBF-based model performed best, judged on the coefficient of determination, root mean square error, and mean absolute error. In the sensitivity test, sulphur dioxide (SO2) was identified as the most influential variable. Research limitations/implications: The quantification capacity of the generated models may go beyond the used data set of observations.
Practical implications: The artificial intelligence techniques provide precise estimation and forecasting of PM 2.5 in the air during the paddy-stubble-burning months of October and November. Originality/value: Unlike past research that focuses on modelling various air pollution parameters, this study specifically focuses on modelling the most vital air pollutant, PM 2.5, during the paddy-stubble-burning months of October and November, when air pollution is at its peak in northern India.
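A sensitivity analysis of the kind this abstract mentions can be sketched with permutation importance, one common way to rank input influence (the paper does not state which sensitivity method it used); the regression data below is synthetic, with anonymous features standing in for the ten meteorological/pollutant inputs:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.inspection import permutation_importance
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in for the study's 231 observations and 10 inputs
# (humidity, SO2, NO2, ..., wind speed); features here are anonymous.
X, y = make_regression(n_samples=231, n_features=10, n_informative=4,
                       noise=10, random_state=0)
y = (y - y.mean()) / y.std()             # scale target for the RBF SVR

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10)).fit(X, y)
# Permutation importance: the drop in R^2 when one input is shuffled.
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
most_vital = int(np.argmax(imp.importances_mean))
```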
EN
Standard time is a key indicator of the production efficiency of the sewing department, and it plays a vital role in production forecasting for the apparel industry. In this article, grey correlation analysis was adopted to identify seven main influencing factors for the determination of standard time in the sewing process: sewing length, stitch density, bending stiffness, fabric weight, production quantity, drape coefficient, and length of service. A novel forecasting model based on the support vector machine (SVM) with particle swarm optimization (PSO) is then proposed to predict the standard time of the sewing process. Based on real data from a clothing company, the proposed forecasting model is verified by evaluating its performance with the squared correlation coefficient (R2) and mean square error (MSE). Using the PSO-SVM method, the R2 and MSE are found to be 0.917 and 0.0211, respectively. In conclusion, the high accuracy of the PSO-SVM method in this experiment demonstrates that the proposed model is a reliable forecasting tool for the determination of standard time and can achieve good prediction results in the sewing process.
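A PSO-tuned SVR can be sketched as below; the swarm size, iteration count, inertia/acceleration constants, and search ranges for C and gamma are all assumed values, and the regression data is a synthetic stand-in for the seven sewing-process factors:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

# Synthetic stand-in for the sewing-process data (seven anonymous features).
X, y = make_regression(n_samples=150, n_features=7, noise=5, random_state=0)
y = (y - y.mean()) / y.std()

def fitness(params):
    """Mean 3-fold CV R^2 of an RBF SVR at the given (log10 C, log10 gamma)."""
    C, gamma = 10.0 ** params
    return cross_val_score(SVR(C=C, gamma=gamma), X, y, cv=3).mean()

# Minimal particle swarm over log10(C) in [-1, 3], log10(gamma) in [-4, 0].
rng = np.random.default_rng(4)
lo, hi = np.array([-1.0, -4.0]), np.array([3.0, 0.0])
pos = rng.uniform(lo, hi, size=(8, 2))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()]

for _ in range(10):
    r1, r2 = rng.random((2, 8, 1))
    # Inertia plus cognitive (pbest) and social (gbest) attraction terms.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()]

best_C, best_gamma = 10.0 ** gbest
best_r2 = pbest_fit.max()
```

Searching in log space keeps the swarm's steps proportional across the many orders of magnitude that C and gamma span.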
EN
Cost overrun in road construction projects in Iraq is one of the major problems facing the construction of new roads. To enable the concerned government agencies to predict the final cost of roads, the objective of this paper is to develop an early cost-estimating model for road projects using a support vector machine, based on 43 bills of quantities collected in Baghdad, Iraq. As cost estimates are required at the early stages of a project, consideration was given to the fact that the input data for the support vector machine model could be easily extracted from sketches or the project's scope definition. The data were collected from contracts awarded by the Mayoralty of Baghdad for projects completed between 2010 and 2013. Mathematical equations were constructed using the Sequential Minimal Optimization (SMO) technique for training support vector machines. The created prediction equations achieved an average accuracy (AA) of 99.65% and a coefficient of determination (R2) of 97.63%.
EN
This paper proposes a model quality assessment method based on the Support Vector Machine, which can be used to develop a digital twin. This work is strongly connected with Industry 4.0, whose main idea is to integrate machines, devices, systems, and IT. One of the goals of Industry 4.0 is to introduce flexible assortment changes. Virtual commissioning can be used to create a simulation model of a plant or to conduct training for maintenance engineers. One branch of virtual commissioning is the digital twin: a virtual representation of a plant or a device. Thanks to the digital twin, different scenarios can be analyzed to make the testing process less complicated and less time-consuming. The goal of this work is to propose a coefficient that takes into account expert knowledge and methods used for model quality assessment (the Normalized Root Mean Square Error, NRMSE, and the Maximum Error, ME). The NRMSE and ME methods are commonly used for this purpose, but they have not been used simultaneously so far; each of them captures a different aspect of a model. The coefficient allows deciding whether the model can be used for digital twin applications. Such an approach makes it possible to test models automatically or semi-automatically.
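The two ingredients of such a quality coefficient, NRMSE and a normalized maximum error, can be computed as below; the equal weighting and the normalization by the measured range are assumed forms for illustration, since the paper defines its own coefficient (which also folds in expert knowledge):

```python
import numpy as np

def model_quality(measured, simulated, w_nrmse=0.5, w_me=0.5):
    """Combine NRMSE and normalized maximum error into one score <= 1.

    The weights and range normalization are assumptions for illustration,
    not the coefficient defined in the paper.
    """
    measured, simulated = np.asarray(measured), np.asarray(simulated)
    span = measured.max() - measured.min()
    nrmse = np.sqrt(np.mean((measured - simulated) ** 2)) / span
    me = np.max(np.abs(measured - simulated)) / span
    return 1.0 - (w_nrmse * nrmse + w_me * me)

t = np.linspace(0, 10, 200)
plant = np.sin(t)                        # "real" plant response
twin = np.sin(t) + 0.05 * np.cos(3 * t)  # candidate model with a small error
score = model_quality(plant, twin)
```

Averaging both terms means a model with a low mean error but one large local deviation is penalized, which is exactly the aspect NRMSE alone misses.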
EN
Squirrel-cage induction motors suffer from numerous faults, for example cracks in the rotor bars. This paper presents a novel algorithm based on the Least Squares Support Vector Machine (LS-SVM) for detecting partial rotor-bar rupture in squirrel-cage asynchronous machines. Spectral analysis of the stator current based on the FFT is applied to extract the fault frequencies related to partial rotor-bar rupture. The LS-SVM approach is then established as a monitoring system to detect the degree of rotor-bar rupture. The training and testing data sets are derived from the spectral analysis of one stator phase current and contain information about the characteristic harmonics related to partial rotor-bar rupture. Satisfactory and more accurate results are obtained by applying LS-SVM to rotor-bar fault diagnosis.
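The FFT feature-extraction step can be sketched as follows; the slip value, sideband amplitudes, and noise level are invented, and a standard SVC stands in for the LS-SVM (scikit-learn has no LS-SVM, whose solution comes from a linear system rather than a quadratic program):

```python
import numpy as np
from sklearn.svm import SVC

fs, n = 1000, 1000                       # 1 s of current sampled at 1 kHz
t = np.arange(n) / fs
rng = np.random.default_rng(5)

def phase_current(broken):
    """50 Hz stator current; a broken bar adds sidebands at (1 +/- 2s)*50 Hz."""
    s = 0.03                             # slip (illustrative value)
    i = np.sin(2 * np.pi * 50 * t)
    if broken:
        i += 0.1 * np.sin(2 * np.pi * 50 * (1 - 2 * s) * t)
        i += 0.1 * np.sin(2 * np.pi * 50 * (1 + 2 * s) * t)
    return i + rng.normal(0, 0.02, n)

def sideband_features(i):
    """Spectral magnitude summed around the two sideband frequencies."""
    spec = np.abs(np.fft.rfft(i)) / n
    f = np.fft.rfftfreq(n, 1 / fs)
    return [spec[np.abs(f - fb) < 1.5].sum() for fb in (47.0, 53.0)]

X = np.array([sideband_features(phase_current(k >= 30)) for k in range(60)])
y = np.array([0] * 30 + [1] * 30)        # 0 = healthy, 1 = broken bar
clf = SVC(kernel="rbf").fit(X[::2], y[::2])
acc = clf.score(X[1::2], y[1::2])
```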
EN
Recently, the analysis of medical imaging has been gaining substantial research interest due to advancements in the computer vision field. Automation of medical image analysis can significantly improve the diagnosis process and lead to better prioritization of patients waiting for medical consultation. This research is dedicated to building a multi-feature ensemble model that combines two independent methods of image description: textural features and deep learning. Different classification algorithms were applied to single-phase computed tomography images containing 8 subtypes of renal neoplastic lesions. The final ensemble combines a textural description with a support vector machine and various configurations of Convolutional Neural Networks. Experimental tests showed that such a model can achieve a weighted F1-score of 93.6% (in 10-fold cross-validation). The improvement over the best individual predictor totalled 3.5 percentage points.
EN
This study offers two Support Vector Machine (SVM) models, for fault detection and fault classification respectively. Different short-circuit events were generated using a 154 kV transmission line modeled in MATLAB/Simulink. The Discrete Wavelet Transform (DWT) is applied to the measured single-terminal current signals before the fault detection stage. Three-level wavelet energies obtained for each of the three phase currents were used as input features for the detector. After fault detection, a half cycle (10 ms) of the three-phase current signals was recorded at a 20 kHz sampling rate. The recorded current signals were used as input parameters for the multi-class SVM classifier. The generated faults were used for training and testing of the SVM classifiers, and the SVM-based detection and classification models were fully implemented in MATLAB and comprehensively tested under different conditions. The effects of the fault impedance, fault inception angle, mother wavelet, and fault location were investigated. The validation tests demonstrate that a quite reliable fault detection and classification system can be developed using SVM, and the simulation results verify that the offered approach can be used for fault detection and classification on the transmission line.
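The wavelet-energy features can be illustrated with a hand-rolled Haar DWT (the paper works in MATLAB and also examines other mother wavelets); the 8 Hz carrier, the high-frequency fault burst, and the noise level below are all invented for the sketch:

```python
import numpy as np
from sklearn.svm import SVC

def haar_level_energies(x, levels=3):
    """Energies of the detail coefficients of a 3-level Haar DWT."""
    energies = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2)   # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2)   # approximation
        energies.append(float((d ** 2).sum()))
    return energies

rng = np.random.default_rng(6)
t = np.arange(256) / 256.0

def current(faulty):
    """Synthetic phase current; a fault adds a short high-frequency burst."""
    i = np.sin(2 * np.pi * 8 * t)
    if faulty:
        i[100:130] += 0.8 * np.sin(2 * np.pi * 60 * t[100:130])
    return i + rng.normal(0, 0.05, 256)

X = np.array([haar_level_energies(current(k >= 25)) for k in range(50)])
y = np.array([0] * 25 + [1] * 25)              # 0 = normal, 1 = faulty
clf = SVC(kernel="rbf").fit(X[::2], y[::2])
acc = clf.score(X[1::2], y[1::2])
```

Each detail level covers one octave of the spectrum, so the per-level energies summarize where in frequency the signal's power sits, which is what makes them compact detector inputs.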
EN
In this paper, support vector machines (SVMs), least squares SVMs (LSSVMs), relevance vector machines (RVMs), and probabilistic classification vector machines (PCVMs) are compared on sixteen binary and multiclass medical datasets. Particular emphasis is put on the comparison between the commonly used Gaussian radial basis function (GRBF) kernel and the relatively new generalized min–max (GMM) kernel and exponentiated-GMM (eGMM) kernel. Since most medical decisions involve uncertainty, a postprocessing approach based on Platt’s method and pairwise coupling is employed to produce probabilistic outputs for prediction uncertainty assessment. The extensive empirical study shows that the SVM classifier using the tuning-free GMM kernel (SVM-GMM) offers good usability and broad applicability, and exhibits competitive performance against some state-of-the-art methods. These results indicate that SVM-GMM can be used as the first-choice method when selecting an appropriate kernel-based vector machine for medical diagnosis. As an illustration, SVM-GMM efficiently achieves a high accuracy of 98.92% on the thyroid disease dataset consisting of 7200 samples.
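The GMM kernel itself is simple to state: each coordinate is split into its positive and negative parts so the vectors become non-negative, and the min–max kernel is then applied. A minimal NumPy sketch (not the authors' code) for a pair of vectors:

```python
import numpy as np

def gmm_kernel(x: np.ndarray, y: np.ndarray) -> float:
    """Generalized min-max (GMM) kernel of two real-valued vectors.

    Split each coordinate into positive and negative parts,
    x~ = [max(x, 0); max(-x, 0)], then apply the min-max kernel:
        k(x, y) = sum_i min(x~_i, y~_i) / sum_i max(x~_i, y~_i)
    """
    xt = np.concatenate([np.maximum(x, 0), np.maximum(-x, 0)])
    yt = np.concatenate([np.maximum(y, 0), np.maximum(-y, 0)])
    denom = np.sum(np.maximum(xt, yt))
    return float(np.sum(np.minimum(xt, yt)) / denom) if denom > 0 else 0.0
```

A precomputed Gram matrix of such values can be handed to any kernel SVM solver that accepts precomputed kernels; unlike GRBF there is no bandwidth parameter, which is the "tuning-free" property the abstract highlights.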
EN
The present research applies six empirical, three statistical, and two soft computing methods to predict the water saturation of an oil reservoir. The employed empirical models are ‘Archie (Trans AIME 146(1):54–62, 1942),’ ‘DeWitte (Oil Gas J 49(16):120–134, 1950),’ ‘Poupon et al. (J Petrol Technol 6(6):27–34, 1954),’ ‘Simandoux (Revue de l’Institut Français du Pétrole, 1963),’ ‘Poupon and Leveaux (1971),’ and ‘Schlumberger (Log interpretation principles/applications, p. 235, 7th printing, Houston, 1998)’; the statistical methods are ‘multiple variable regression,’ ‘fine-, medium-, and coarse-tree regression trees,’ and ‘bagged and boosted tree ensembles’; and the soft computing techniques are ‘support vector machine (SVM)’ and ‘artificial neural network (ANN) trained with Levenberg–Marquardt (LM), Bayesian regularization (BR), and scaled conjugate gradient (SCG).’ In addition, log variables are ranked by their significance in water saturation modeling. To achieve these goals, 521 data points are selected from three wells. Each data point has laboratory-derived core water saturation information and six well-log features: gamma ray (GR), bulk density (RHOB), sonic travel time (DT), true resistivity (LLD), neutron porosity (φN), and depth. Statistical indexes, namely the regression coefficient, mean squared error, root mean squared error, average absolute percentage error, minimum absolute error percentage, and maximum absolute error percentage, are used to compare the prediction efficiency of the studied methods. Results show that the empirical models provide exceedingly poor prediction efficiency. Among the studied models, the fine- and medium-tree regression trees, the bagged and boosted tree ensembles, the fine Gaussian SVM, and the ANNs trained with LM and BR are very efficient predictive strategies. The log ranking reveals that GR and DT are the most important, whereas RHOB and φN are the least vital predictor variables in water saturation prediction.
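The oldest of the empirical models, Archie's (1942) equation, is a closed-form relation and can serve as a worked example; the default exponents below (a = 1, m = 2, n = 2) are common textbook values assumed here, not parameters taken from this study:

```python
def archie_sw(rt: float, rw: float, phi: float,
              a: float = 1.0, m: float = 2.0, n: float = 2.0) -> float:
    """Water saturation from Archie's (1942) equation:
        Sw = ((a * Rw) / (phi**m * Rt)) ** (1 / n)
    rt  : true formation resistivity (ohm-m), e.g. from the LLD log
    rw  : formation water resistivity (ohm-m)
    phi : porosity (fraction)
    a, m, n : tortuosity factor, cementation exponent, saturation exponent
    """
    return ((a * rw) / (phi ** m * rt)) ** (1.0 / n)

# Example: Rt = 10 ohm-m, Rw = 0.1 ohm-m, porosity 25%
sw = archie_sw(rt=10.0, rw=0.1, phi=0.25)
print(sw)  # prints 0.4
```

Such clean-sand assumptions are one reason the empirical models fare poorly on real shaly-sand data, as the study reports.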
EN
In this paper, we compare the following machine learning methods as classifiers for sentiment analysis: k-nearest neighbours (kNN), artificial neural network (ANN), support vector machine (SVM), and random forest. We used a dataset containing 5,000 movie reviews, of which 2,500 were marked as positive and 2,500 as negative. We chose 5,189 words that influence sentence sentiment. The dataset was prepared using a term document matrix (TDM) and classical multidimensional scaling (MDS). This is the first time that TDM and MDS have been used to choose the characteristics of text in sentiment analysis. We also examined different settings of each classifier, such as the kernel type for SVM and the neighbour count for kNN. All calculations were performed in the R language, in RStudio v 3.5.2. Our work can be reproduced because all of our data sets and source code are public.
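The feature-construction step, classical MDS applied to pairwise distances between documents, has a standard double-centering formulation. A minimal NumPy sketch (the paper itself works in R, so this is only an illustration of the technique):

```python
import numpy as np

def classical_mds(dist: np.ndarray, k: int = 2) -> np.ndarray:
    """Classical (Torgerson) MDS: embed n points in k dimensions from a
    symmetric n x n matrix of pairwise distances."""
    n = dist.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    b = -0.5 * j @ (dist ** 2) @ j           # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)           # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]         # pick the k largest
    lam = np.clip(vals[idx], 0, None)        # clip small negative eigenvalues
    return vecs[:, idx] * np.sqrt(lam)
```

Applied to distances between the rows of a term document matrix, this yields the low-dimensional document coordinates that are then fed to the classifiers.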