Customer churn prediction is used to retain the customers at the highest risk of churn by proactively engaging with them. Many machine learning-based data mining approaches have previously been used to predict client churn. However, single-model classifiers increase the scatter of predictions and yield low model performance, which degrades the reliability of the model. Hence, a bag-of-learners classification is used, in which learners with high performance are selected to identify wrongly and correctly classified instances, thereby increasing the robustness of the model. Furthermore, loss of interpretability in the model during prediction leads to insufficient prediction accuracy. Hence, an associative classifier with the Apriori algorithm is introduced as a booster that integrates classification and association rule mining to build a strong classification model, in which frequent itemsets are obtained using the Apriori algorithm. Accurate prediction is then provided by testing the instances wrongly classified in the bagging phase against the rules generated by the associative classifier. The proposed models are simulated on the Python platform, and the results achieve high accuracy, ROC score, precision, specificity, F-measure, and recall.
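A minimal sketch of the bagging phase described above, under assumed settings (synthetic data, decision-tree base learners, an illustrative 0.80 keep threshold): bootstrap-trained learners are retained only if they perform well, a majority vote is taken, and the instances the ensemble still misclassifies are set aside for the rule-based booster stage.

```python
# Sketch of the "bag of learners" phase; dataset and threshold are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rng = np.random.default_rng(0)
kept = []
for _ in range(25):                                  # bootstrap rounds
    idx = rng.integers(0, len(X_tr), len(X_tr))      # sample with replacement
    clf = DecisionTreeClassifier(max_depth=5).fit(X_tr[idx], y_tr[idx])
    if clf.score(X_tr, y_tr) >= 0.80:                # keep high performers only
        kept.append(clf)

votes = np.mean([c.predict(X_te) for c in kept], axis=0)
y_hat = (votes >= 0.5).astype(int)                   # majority vote
wrong = np.flatnonzero(y_hat != y_te)                # handed to the rule stage
print(len(kept), "learners kept;", len(wrong), "instances for the rule stage")
```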
One of the most common assaults on current Internet of Things (IoT) network-based healthcare infrastructures is distributed denial of service (DDoS). The most challenging task in this environment is managing the vast multimedia data created by IoT devices, which is difficult to handle through the cloud alone. As software-defined networking (SDN) is still in its early stages, the sampling-oriented measurement techniques used today in IoT networks produce low accuracy, increased memory usage, low attack detection, and high processing and network overheads. The aim of this research is to improve attack detection accuracy by using the DPTCM-KNN approach. The DPTCM-KNN technique outperforms the support vector machine (SVM), yet it can still be improved. For healthcare systems, this work develops a unique approach for detecting DDoS assaults on SDN using DPTCM-KNN.
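Not the paper's DPTCM-KNN itself, but a plain k-nearest-neighbors detector on flow-level features, sketched here as the kind of baseline the proposed method refines; the feature meanings and data are hypothetical.

```python
# Simplified k-NN stand-in for a DDoS flow detector; data are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
normal = rng.normal(0.0, 1.0, size=(800, 4))    # e.g. packet rate, byte rate...
attack = rng.normal(2.5, 1.5, size=(200, 4))    # DDoS flows shifted/heavier
X = np.vstack([normal, attack])
y = np.r_[np.zeros(800), np.ones(200)]          # 1 = attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)
det = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
det.fit(X_tr, y_tr)
print("detection accuracy:", det.score(X_te, y_te))
```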
In this paper, we propose using the propeller modulation on the transmitted signal (called sonar micro-Doppler) and different support vector machine (SVM) kernels for automatic recognition of moving sonar targets. In general, the main challenge for researchers and practitioners working in sonar target recognition is the lack of access to a valid and comprehensive database; a comprehensive mathematical model that simulates the signal received from the target can therefore address this challenge. The mathematical model used in this paper simulates the return signal of moving sonar targets well, and the resulting signals have unique properties known as frequency signatures. To reduce the complexity of the model, a 128-point fast Fourier transform (FFT) is used. SVM, the selected classifier and one of the most popular machine learning algorithms, is tested with three main kernel functions: the RBF, linear, and polynomial kernels. The accuracy of target recognition is evaluated for different signal-to-noise ratios (SNRs of −20, −15, −10, −5, 0, 5, 10, 15, and 20) and different viewing angles (10, 20, 30, 40, 50, 60, 70, and 80). For a fairer comparison, a multilayer perceptron neural network trained with back-propagation (MLP-BP) and with the grey wolf optimization algorithm (MLP-GWO) was also used, but given the number of classes its performance was not satisfactory. The results showed that the RBF kernel is more capable at high SNRs (SNR = 20, viewing angle = 10), with an accuracy of 98.528%.
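A hedged sketch of the recognition chain: a synthetic propeller-modulated echo (not the paper's signal model), a 128-point FFT magnitude as the frequency signature, and SVMs with the three kernels compared at one SNR. All signal parameters below are illustrative assumptions.

```python
# FFT "frequency signature" features + SVM kernel comparison; synthetic echoes.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(2)
fs, n = 4096, 512

def echo(blades, snr_db):
    t = np.arange(n) / fs
    # carrier amplitude-modulated by a blade-rate tone (toy propeller model)
    s = np.cos(2 * np.pi * 60 * t) * (1 + 0.5 * np.cos(2 * np.pi * blades * 15 * t))
    noise = rng.normal(size=n) * 10 ** (-snr_db / 20) * np.std(s)
    return s + noise

X = np.array([np.abs(np.fft.rfft(echo(b, 10), 128))[:64]   # 128-point FFT
              for b in (3, 4, 5) for _ in range(200)])
y = np.repeat([3, 4, 5], 200)                               # blade-count classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=2)

for kernel in ("rbf", "linear", "poly"):
    acc = SVC(kernel=kernel, C=10).fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{kernel:6s} accuracy: {acc:.3f}")
```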
In this paper, four common winding faults in power transformers (axial displacement (AD), serial capacitance variation (VSC), ground capacitance variation (VGC), and open circuit (OC)) are simulated on a transformer winding model to classify the fault type, location, and extent by applying an intelligent methodology for diagnosing transformer faults. The methodology depends on building a comprehensive database of Frequency Response Analysis (FRA) traces for healthy and faulty conditions and analyzing them using statistical and mathematical indicators. This database, which can cover all possible faults in terms of location and extent, is used to train a support vector machine (SVM) classifier that is then able to classify new data. The test results showed that the proposed method detects the fault type and determines its location and extent with high accuracy, and it also contributes to the development of machine learning applications for transformers.
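The specific indicators used in the paper are not listed in the abstract, so the sketch below assumes two common FRA indicators (per-band correlation coefficient and standard-deviation ratio between a healthy fingerprint and a measured trace) as SVM inputs; the traces themselves are synthetic placeholders.

```python
# Hedged sketch: FRA statistical indicators feeding an SVM fault classifier.
import numpy as np
from sklearn.svm import SVC

def fra_indicators(healthy, measured, bands=4):
    feats = []
    for h, m in zip(np.array_split(healthy, bands), np.array_split(measured, bands)):
        cc = np.corrcoef(h, m)[0, 1]           # correlation per frequency band
        sd = np.std(m) / (np.std(h) + 1e-12)   # relative dispersion per band
        feats += [cc, sd]
    return np.array(feats)

rng = np.random.default_rng(3)
healthy = np.sin(np.linspace(0, 20, 400))      # stand-in healthy FRA response
X, y = [], []
for label, severity in [(0, 0.0), (1, 0.3), (2, 0.6)]:   # 0 = healthy-like
    for _ in range(100):
        measured = healthy + severity * rng.normal(size=400)
        X.append(fra_indicators(healthy, measured)); y.append(label)

clf = SVC(kernel="rbf").fit(np.array(X), np.array(y))
print("training accuracy:", clf.score(np.array(X), np.array(y)))
```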
Purpose: In this study, three artificial intelligence techniques, namely Artificial Neural Network (ANN), Random Forest (RF), and Support Vector Machine (SVM), are employed for PM2.5 modelling. The study is carried out in Rohtak city, India, during the paddy stubble burning months of October and November. The models are compared to check their respective efficacies, and a sensitivity analysis is performed to identify the most influential parameter in PM2.5 modelling. Design/methodology/approach: Air pollution data for October and November from 2016 to 2020 were collected for the study. These months were chosen because paddy stubble burning and major festivities using fireworks occur during them. Unusable data entries (zero values, blank records, etc.) were eliminated from the gathered data set, leaving 231 observations of each parameter for the study. The models (ANN, RF, SVM) had PM2.5 as the output variable, while relative humidity, sulfur dioxide, nitrogen dioxide, nitric oxide, carbon monoxide, ozone, temperature, solar radiation, wind direction, and wind speed acted as input variables. The prototypes created from the training data set were verified on the testing data set, and a sensitivity analysis was performed to quantify the impact of each input on the output variable, PM2.5. Findings: The SVM model with the RBF kernel performed best in terms of the coefficient of determination, root mean square error, and mean absolute error. In the sensitivity test, sulphur dioxide (SO2) was adjudged the most influential variable. Research limitations/implications: The quantification capacity of the generated models may not extend beyond the used data set of observations. Practical implications: The artificial intelligence techniques provide precise estimation and forecasting of PM2.5 in the air during the paddy stubble burning months of October and November. Originality/value: Unlike past research that focuses on modelling various air pollution parameters, this study focuses specifically on modelling the most critical air pollutant, PM2.5, during the paddy stubble burning months of October and November, when air pollution in northern India is at its peak.
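A sketch of the comparison and sensitivity steps under assumed conditions: fit RF, SVM(RBF), and a small ANN on synthetic stand-ins for the ten inputs and 231 observations, score with R², RMSE, and MAE, and rank inputs by permutation importance (one common way to run such a sensitivity test; the paper's exact procedure is not stated in the abstract).

```python
# Model comparison + permutation-importance sensitivity; synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(4)
X = rng.normal(size=(231, 10))                    # 10 inputs, 231 observations
y = 3 * X[:, 1] + X[:, 4] + rng.normal(scale=0.5, size=231)   # stand-in "PM2.5"
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=4)

models = {
    "RF": RandomForestRegressor(random_state=4),
    "SVM_RBF": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10)),
    "ANN": make_pipeline(StandardScaler(), MLPRegressor(max_iter=2000, random_state=4)),
}
for name, m in models.items():
    p = m.fit(X_tr, y_tr).predict(X_te)
    rmse = mean_squared_error(y_te, p) ** 0.5
    print(f"{name}: R2={r2_score(y_te, p):.3f} RMSE={rmse:.3f} "
          f"MAE={mean_absolute_error(y_te, p):.3f}")

imp = permutation_importance(models["SVM_RBF"], X_te, y_te, random_state=4)
print("most influential input index:", imp.importances_mean.argmax())
```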
This work presents an efficient hardware architecture of a Support Vector Machine (SVM) for the classification of hyperspectral remotely sensed data using the High-Level Synthesis (HLS) method. The high classification time and power consumption of traditional classification of remotely sensed data are the main motivation for this work; the presented architecture helps classify remotely sensed data in real time so that immediate action can be taken during natural disasters. An embedded SVM is designed and implemented on a Zynq SoC for the classification of hyperspectral images. The remotely sensed data sets are tested on different platforms, and the performance is compared with existing works. The novelty of the proposed work is extending the HLS-based FPGA implementation to an onboard classification system in remote sensing. The experimental results for the selected data sets from different classes show that our architecture implemented on the Zynq 7000 achieves a delay of 11.26 μs and a power consumption of 1.7 W, which is substantially better than other Field Programmable Gate Array (FPGA) implementations using a Hardware Description Language (HDL) and Central Processing Unit (CPU) implementations.
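The hardware-friendly core of a linear SVM is one multiply-accumulate per feature plus a bias and a sign comparison. This sketch, under assumed synthetic data standing in for per-pixel spectral bands, trains a linear SVM in software and then re-evaluates its decision function explicitly, the way an FPGA datapath would; it is not the paper's architecture.

```python
# Explicit linear-SVM decision function, mirroring a hardware MAC datapath.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=500, n_features=32, random_state=5)
clf = LinearSVC(dual=False).fit(X, y)

w = clf.coef_.ravel()                  # fixed weights stored on-chip
b = clf.intercept_[0]
x = X[0]                               # one "pixel" spectrum
score = float(np.dot(w, x) + b)        # MAC loop + bias in hardware
label = int(score > 0)                 # comparator
assert label == clf.predict(X[:1])[0]  # matches the library prediction
print(score, label)
```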
Clothing images in the e-commerce industry play an important role in providing customers with information. This paper divides clothing images into two groups: pure clothing images and dressed clothing images. Targeting small and medium-sized clothing companies or merchants, it compares traditional machine learning and deep learning models to determine suitable models for each group. For pure clothing images, the HOG+SVM algorithm with the Gaussian kernel function obtains the highest classification accuracy, 91.32%, compared to the Small VGG network. For dressed clothing images, the CNN model obtains higher accuracy than the HOG+SVM algorithm, with the Small VGG network achieving the highest accuracy rate of 69.78%. Therefore, for end users with only ordinary computing processors, it is recommended to apply the traditional machine learning algorithm HOG+SVM to classify pure clothing images, and to classify dressed clothing images with a more efficient, less computationally intensive lightweight model such as the Small VGG network.
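A minimal HOG+SVM sketch for the pure-clothing branch, using scikit-image's hog descriptor and an SVC with a Gaussian (RBF) kernel; the images and labels below are random placeholders, so the printed accuracy is meaningless except as a smoke test.

```python
# HOG feature extraction + RBF-SVM classification; placeholder images.
import numpy as np
from skimage.feature import hog
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(6)
imgs = rng.random((200, 64, 64))        # grayscale 64x64 stand-in images
y = rng.integers(0, 4, 200)             # 4 hypothetical clothing categories

X = np.array([hog(im, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2)) for im in imgs])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=6)
clf = SVC(kernel="rbf", C=10).fit(X_tr, y_tr)   # Gaussian kernel
print("accuracy on placeholder data:", clf.score(X_te, y_te))
```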
This research aims to propose an effective model for the detection of defective Printed Circuit Boards (PCBs) at the output stage of the Surface-Mount Technology (SMT) line. The emphasis is placed on increasing the classification accuracy, reducing the algorithm training time, and further improving the final product quality. The approach combines a feature extraction technique, Principal Component Analysis (PCA), and a classification algorithm, the Support Vector Machine (SVM), with previously applied Automated Optical Inspection (AOI). Different types of SVM algorithms (linear, kernel-based, and weighted) were tuned to get the best accuracy of the resulting algorithm for separating good-quality and defective products. A novel automated defect detection approach for the PCB manufacturing process is proposed, and data from a real PCB manufacturing process were used for this experimental study. The resulting PCA-LWSVM model achieved 100% accuracy in the PCB defect detection task. This article proposes a potentially unique model for accurate defect detection in the PCB industry: a combination of PCA and LWSVM methods with AOI technology is an original and effective solution. The proposed model can be used in various manufacturing companies as a post-processing step for an SMT line with AOI, either for accurate defect detection or for preventing false calls.
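A sketch of the PCA-then-weighted-SVM chain under assumed conditions: synthetic imbalanced pass/fail data (defects rare, as on a real SMT line), with `class_weight` compensating for the imbalance; component counts and rates are illustrative.

```python
# PCA feature extraction -> class-weighted SVM on imbalanced pass/fail data.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=1500, n_features=40, weights=[0.95],
                           random_state=7)            # ~5% "defective" boards
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=7)

model = make_pipeline(StandardScaler(),
                      PCA(n_components=10),           # feature extraction
                      SVC(kernel="linear", class_weight="balanced"))
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```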
Computer-aided detection systems provide a second opinion during lung cancer diagnosis, and for early-stage detection and treatment the false-positive reduction stage also plays a vital role. The main motive of this research is to propose a method for lung cancer segmentation. In recent years, lung cancer detection and tumor segmentation have been considered among the most important steps in surgical planning and medication preparation, and it is very difficult for researchers to detect the tumor area in computed tomography (CT) images. The proposed system segments the lungs, classifies the images into normal and abnormal, and consists of two phases. The first phase is made up of several stages: pre-processing, feature extraction, feature selection, classification, and finally segmentation of the tumor. The input CT image is sent through the pre-processing stage, where noise removal is taken care of; texture features are then extracted from the pre-processed image; in the next stage, features are selected using the crow search optimization algorithm; and an artificial neural network is then used to separate normal lung images from abnormal ones. Finally, abnormal images are processed through the fuzzy K-means algorithm to segment the tumors separately. In the second phase, an SVM classifier is used to reduce false positives. The proposed system delivers an accuracy of 96%, specificity of 100%, and sensitivity of 99%, and it reduces false positives. Experimental results show that the system outperforms many other systems in the literature in terms of sensitivity, specificity, and accuracy, and, despite the usual trade-off between effectiveness and efficiency, it also saves computation time. The work shows that the proposed system, formed by integrating fuzzy K-means clustering and a deep learning technique, is simple yet powerful; it is effective in reducing false positives, segments tumors, performs classification, delivers better performance than other strategies in the literature, and gives accurate decisions when compared with human doctors' decisions.
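A sketch of the second phase only: an SVM that re-examines candidate detections from the first phase and rejects false positives. The candidate features (e.g. size and intensity statistics) and class proportions are synthetic assumptions.

```python
# Second-stage SVM false-positive reduction on candidate detections.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(8)
true_nodules = rng.normal(1.0, 0.6, size=(150, 6))    # confirmed candidates
false_alarms = rng.normal(-0.5, 0.6, size=(450, 6))   # false positives dominate
X = np.vstack([true_nodules, false_alarms])
y = np.r_[np.ones(150), np.zeros(450)]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=8)
fp_filter = SVC(kernel="rbf", class_weight="balanced").fit(X_tr, y_tr)
keep = fp_filter.predict(X_te)            # 1 = keep candidate, 0 = reject
print("candidates kept:", int(keep.sum()), "of", len(keep))
```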
Parkinson’s disease (PD) is the most common neurological disorder that typically affects elderly people. In the earlier stages of the disease, 90% of patients have been seen to develop voice disorders, namely hypokinetic dysarthria. As time passes, the severity of PD increases, and patients have difficulty performing different speech tasks. During the progression of the disease, the quality of speech signals deteriorates due to reduced control of articulatory organs such as the tongue, jaw, and lips. Periodic medical evaluations are very important for PD patients; however, having access to a medical appointment with a neurologist is a privilege in most countries. Considering that the speech recording process is inexpensive and very easy to perform, this paper explores the suitability of mapping information about the dysarthria level onto the neurological state of patients and vice versa. Three levels of severity are considered in a multiclass framework using time-frequency (TF) features and random forest along with an Error-Correcting Output Code (ECOC) approach. The multiclass classification task based on dysarthria level is performed using the TF features with words and diadochokinetic (DDK) speech tasks. The developed model shows an unweighted average recall (UAR) of 68.49% with the DDK task /pakata/ based on the m-FDA level, and 48.8% with the word /petaka/ based on the UPDRS level, using the random forest classifier. To evaluate the neurological state using the dysarthria level, the developed models are used to predict the MDS-UPDRS-III level of the patients; the highest matching accuracy, 32%, is achieved with the word /petaka/. Similarly, the multiclass classification framework based on MDS-UPDRS-III is applied to predict the dysarthria level of the patients; in this case, the highest matching accuracy of 18% was obtained with the DDK task /pataka/.
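A minimal sketch of the classifier setup named above: a random forest wrapped in scikit-learn's Error-Correcting Output Code scheme, scored with unweighted average recall (balanced accuracy). The TF features and severity labels are simulated placeholders.

```python
# ECOC-wrapped random forest for 3-level severity classification; placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OutputCodeClassifier

rng = np.random.default_rng(9)
X = rng.normal(size=(300, 40))                     # time-frequency features
y = rng.integers(0, 3, 300)                        # 3 severity levels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=9)
ecoc = OutputCodeClassifier(RandomForestClassifier(random_state=9),
                            code_size=2.0, random_state=9)
y_hat = ecoc.fit(X_tr, y_tr).predict(X_te)
print("UAR:", balanced_accuracy_score(y_te, y_hat))   # unweighted average recall
```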
Complex gaps may be formed when carrying out live working in substations, and the discharge characteristics of complex gaps differ from those of single gaps. This paper focuses on the prediction of the critical 50% positive switching impulse breakdown voltage (U50,crit+) of phase-to-phase complex gaps formed in 220 kV substations. Firstly, several electric field features were defined on the shortest discharge path of the complex gap to reflect the electric field distribution. Then, support vector machine (SVM) prediction models were established based on the connection between electric field distribution and breakdown voltage. Finally, the U50,crit+ data of the complex gap were obtained through two rounds of electric field calculation and prediction. The prediction results show that the minimum U50,crit+ of phase-to-phase complex gaps is 1147 kV, and that the critical position is 0.9 m away from the high-voltage conductor, accounting for 27% of the whole gap. Both the critical position and the voltage are in good agreement with the values provided in IEC 61472.
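A sketch of the regression step under stated assumptions: support vector regression mapping electric-field features sampled along the shortest discharge path to a breakdown voltage. The features and the target relation are synthetic illustrations, not the paper's data.

```python
# SVR from electric-field features to breakdown voltage; synthetic relation.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(10)
X = rng.uniform(0.2, 1.0, size=(120, 5))      # e.g. Emax, mean E, gradients...
U50 = 900 + 400 * X[:, 0] - 150 * X[:, 2] + rng.normal(scale=15, size=120)

X_tr, X_te, y_tr, y_te = train_test_split(X, U50, random_state=10)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100, epsilon=5))
model.fit(X_tr, y_tr)
print("R2 on held-out cases:", model.score(X_te, y_te))
```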
Precise and reliable runoff forecasting is crucial for water resources planning and management. The present study was conducted to test the applicability of different data-driven techniques, including artificial neural networks (ANN), support vector machine (SVM), random forest (RF) and M5P models, for runoff forecasting with lead times of 1 day and 2 days in the Koyna River basin, India. The best input variables for the development of the models were selected by applying the Gamma test (GT). Two different scenarios were considered to select the input variables for 2 days ahead runoff forecasting: in the first scenario, the 1 day ahead runoff forecast (t+1) was not used as an input, while in the second scenario it was used as an input along with the other input variables. For 2 days ahead runoff forecasting, the models developed under the second scenario performed more accurately than those under the first. The RF model performed best for 1 day ahead runoff forecasting, with root mean square error (RMSE), coefficient of efficiency (CE), correlation coefficient (r) and coefficient of determination (R²) values of 168.94 m³/s, 0.67, 0.84 and 0.704, respectively, during the test period. For 2 days ahead runoff forecasting, the RF and ANN models performed best in the first and second scenarios, respectively, with RMSE, CE, r and R² values of 169.72 m³/s, 0.67, 0.84 and 0.7023 in the first scenario and 148.55 m³/s, 0.74, 0.87 and 0.76 in the second scenario during the test period. Finally, the results revealed that adding the 1 day ahead runoff forecast as an input increased the accuracy of the 2 days ahead forecasts. In addition, the dependability of the various models was determined using uncertainty analysis.
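A sketch of the two forecasting scenarios under assumed settings: build lagged-runoff inputs, and for the 2-day lead either exclude (scenario 1) or include (scenario 2) the 1-day-ahead forecast as an extra input; CE is the Nash-Sutcliffe coefficient of efficiency. The series and the lag choices are illustrative, not the study's GT-selected inputs.

```python
# Lagged inputs, two scenarios for 2-day-ahead forecasting, Nash-Sutcliffe CE.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def nse(obs, sim):
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(11)
q = np.abs(np.cumsum(rng.normal(size=1200))) + 1      # synthetic daily runoff

lag = np.column_stack([q[2:-2], q[1:-3], q[:-4]])     # Q(t), Q(t-1), Q(t-2)
y1, y2 = q[3:-1], q[4:]                               # t+1 and t+2 targets
tr, te = slice(0, 900), slice(900, len(y2))           # chronological split

m1 = RandomForestRegressor(random_state=11).fit(lag[tr], y1[tr])
q1_hat = m1.predict(lag)                              # 1-day-ahead forecast

m2a = RandomForestRegressor(random_state=11).fit(lag[tr], y2[tr])  # scenario 1
X2 = np.column_stack([lag, q1_hat])                                # scenario 2
m2b = RandomForestRegressor(random_state=11).fit(X2[tr], y2[tr])
print("CE scenario 1:", nse(y2[te], m2a.predict(lag[te])))
print("CE scenario 2:", nse(y2[te], m2b.predict(X2[te])))
```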
Transient stability assessment is an integral part of dynamic security assessment of power systems. Traditional methods of transient stability assessment, such as the time-domain simulation approach and direct methods, are appropriate for offline studies and thus cannot be applied to online transient stability prediction, which is a major requirement in modern power systems. This motivated the application of an artificial intelligence-based approach; in this regard, supervised machine learning is beneficial for predicting transient stability status in the presence of uncertainties. Therefore, this paper examines the application of binary support vector machine-based supervised machine learning for predicting the transient stability status of a power system, considering uncertainties in factors such as load, faulted line, fault type, fault location and fault clearing time. The support vector machine is trained using a Gaussian radial basis function kernel, and its hyperparameters are optimized using Bayesian optimization. Results obtained for the IEEE 14-bus test system demonstrated that the proposed method offers a fast technique for probabilistic transient stability status prediction with excellent accuracy. DIgSILENT PowerFactory and MATLAB were utilized for the transient stability time-domain simulations (to obtain training data for the support vector machine) and for applying the support vector machine, respectively.
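A sketch of the training step with Bayesian hyperparameter optimization of the RBF-SVM, here via scikit-optimize's BayesSearchCV (one common implementation; the paper's optimizer setup is not given in the abstract). The stability labels and features are synthetic stand-ins.

```python
# RBF-SVM with Bayesian-optimized C and gamma; synthetic stability data.
import numpy as np
from skopt import BayesSearchCV
from skopt.space import Real
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(12)
X = rng.normal(size=(600, 8))       # e.g. loads, fault location/clearing time
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)    # 1 = stable, 0 = unstable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=12)
opt = BayesSearchCV(
    SVC(kernel="rbf"),
    {"C": Real(1e-2, 1e3, prior="log-uniform"),
     "gamma": Real(1e-4, 1e1, prior="log-uniform")},
    n_iter=20, cv=3, random_state=12)
opt.fit(X_tr, y_tr)
print(opt.best_params_, "test accuracy:", opt.score(X_te, y_te))
```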
This paper proposes a model quality assessment method based on the Support Vector Machine, which can be used to develop a digital twin. This work is strongly connected with Industry 4.0, whose main idea is to integrate machines, devices, systems, and IT; one of its goals is to introduce flexible assortment changes. Virtual commissioning can be used to create a simulation model of a plant or to conduct training for maintenance engineers. One branch of virtual commissioning is the digital twin, a virtual representation of a plant or a device. Thanks to the digital twin, different scenarios can be analyzed to make the testing process less complicated and less time-consuming. The goal of this work is to propose a coefficient that takes into account expert knowledge and methods used for model quality assessment, namely the Normalized Root Mean Square Error (NRMSE) and the Maximum Error (ME). The NRMSE and ME methods are commonly used for this purpose, but they have not been used simultaneously so far, and each of them takes into consideration a different aspect of a model. The coefficient allows deciding whether the model can be used for digital twin applications. Such an approach makes it possible to test models automatically or semi-automatically.
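A minimal sketch of a combined quality coefficient of the kind described above: NRMSE captures average fit, the normalized maximum error captures worst-case fit, and an expert weight blends them. The convex combination and the acceptance threshold below are assumptions, not the paper's exact formula.

```python
# Combined NRMSE/ME model-quality coefficient; weighting is an assumption.
import numpy as np

def nrmse(y, y_hat):
    return np.sqrt(np.mean((y - y_hat) ** 2)) / (np.ptp(y) + 1e-12)

def norm_max_error(y, y_hat):
    return np.max(np.abs(y - y_hat)) / (np.ptp(y) + 1e-12)

def quality_coefficient(y, y_hat, w=0.7):
    """w encodes expert preference between average and worst-case fit."""
    return w * nrmse(y, y_hat) + (1 - w) * norm_max_error(y, y_hat)

y = np.sin(np.linspace(0, 10, 200))                                 # plant response
y_hat = y + np.random.default_rng(13).normal(scale=0.05, size=200)  # model output
q = quality_coefficient(y, y_hat)
print("quality:", q, "-> usable for digital twin" if q < 0.1 else "-> reject")
```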
The water quality index (WQI) is an essential indicator for managing water usage properly. This study aimed at applying a machine learning-based approach integrating attribute realization (AR) and the support vector machine (SVM) algorithm to classify the Chao Phraya River's water quality. The historical monitoring dataset for 2008-2019, including biological oxygen demand (BOD), conductivity (Cond), dissolved oxygen (DO), faecal coliform bacteria (FCB), total coliform bacteria (TCB), ammonia (NH3-N), nitrate (NO3-N), salinity (Sal), suspended solids (SS), total nitrogen (TN), total dissolved solids (TDS), and turbidity (Turb), was processed in four steps: data pre-processing using a means-substitution method, evaluation of contributing parameters through recognition-pattern study, examination of mathematical functions for quality classification, and validation of the obtained approach. The results showed that NH3-N, TCB, FCB, BOD, DO, and Sal were the main attributes contributing, in that order, to water quality classification, with confidence values of 0.80, 0.79, 0.78, 0.76, 0.69, and 0.64, respectively. The linear function was more suitable for classifying the river water data than the sigmoid, radial basis, and polynomial functions. Different numbers of attributes and different mathematical functions produced different classification performance and accuracy. The validation confirmed that AR-SVM is a potent approach for classifying river water quality, with an accuracy of 0.86-0.95 when three to six attributes were applied.
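A sketch of the classification step: a linear-kernel SVM trained on the top-ranked attributes, varying the subset size from three to six as the study does. The attribute ranking order is taken from the abstract, but the data and class rule are synthetic stand-ins.

```python
# Linear-kernel SVM over growing attribute subsets (top 3..6); synthetic data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(14)
names = ["NH3-N", "TCB", "FCB", "BOD", "DO", "Sal"]       # ranked attributes
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.8 * X[:, 1] + 0.6 * X[:, 3] > 0).astype(int)  # stand-in class

for k in range(3, 7):
    model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    acc = cross_val_score(model, X[:, :k], y, cv=5).mean()
    print(f"top {k} attributes {names[:k]}: accuracy {acc:.2f}")
```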
Land cover mapping of marshland areas from satellite image data is not a simple process, due to the similarity of the spectral characteristics of the land cover; this leads to challenges with some land cover classes, especially wetland classes. In this study, satellite images from Sentinel-2B of the ESA (European Space Agency) were used to classify the land cover of the Al Hawizeh marsh on the Iraq-Iran border. Three classification methods were compared in terms of accuracy, using multispectral satellite images with a spatial resolution of 10 m. The classification was performed using three different algorithms, namely Maximum Likelihood Classification (MLC), Artificial Neural Networks (ANN), and Support Vector Machine (SVM), carried out in ENVI 5.1 software to detect six land cover classes: deep water marsh, shallow water marsh, marsh vegetation (aquatic vegetation), urban (built-up) area, agricultural area, and barren soil. The results showed that the MLC method applied to Sentinel-2B images provides a higher overall accuracy and kappa coefficient than the ANN and SVM methods: overall accuracy values for the MLC, ANN, and SVM methods were 85.32%, 70.64%, and 77.01%, respectively.
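A sketch of the accuracy assessment reported above: overall accuracy and Cohen's kappa computed from reference versus predicted labels over the six classes. The labels here are randomly generated placeholders with roughly 80% agreement.

```python
# Overall accuracy + kappa coefficient for a 6-class land-cover map.
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

rng = np.random.default_rng(15)
reference = rng.integers(0, 6, 1000)                   # ground-truth classes
predicted = np.where(rng.random(1000) < 0.8,           # ~80% agreement
                     reference, rng.integers(0, 6, 1000))

print("overall accuracy:", accuracy_score(reference, predicted))
print("kappa:", cohen_kappa_score(reference, predicted))
print(confusion_matrix(reference, predicted))
```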
The mafic and ultramafic rocks of Mettupalayam belong to the southern granulite terrain of India, which is associated with vital economic resources. The advantage of Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data for mapping litho units is exploited here to differentiate the rock units with the aid of a band combination (1, 3, 6), principal component analysis (5, 1, 6), and band-ratio combinations (2/3, 3/2, 1/5 and (9–8)/1, (8–6)/2, (9–6)/3). As part of the field study, samples and ground control points were collected, and laboratory reflectance spectra were generated for the samples. Spectral Angle Mapper (SAM) and Support Vector Machine (SVM) classifications were performed on the ASTER data with the aid of the spectra obtained under laboratory conditions to demarcate the abundance of mafic and ultramafic rocks in the area. The XRF method was used to retrieve the major oxides of the field-collected samples, and the spectral absorption characteristics were validated against them. The results give a vivid interpretation of the litho units.
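A sketch of the Spectral Angle Mapper rule used above: each pixel spectrum is assigned to the laboratory reference spectrum with the smallest spectral angle. The spectra below are synthetic stand-ins for ASTER bands.

```python
# Spectral Angle Mapper: assign pixel to the reference with the smallest angle.
import numpy as np

def spectral_angle(a, b):
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))          # angle in radians

rng = np.random.default_rng(16)
refs = rng.random((2, 9))                # e.g. mafic vs. ultramafic references
pixel = refs[1] + rng.normal(scale=0.05, size=9)       # noisy "ultramafic" pixel

angles = [spectral_angle(pixel, r) for r in refs]
print("angles:", angles, "-> class", int(np.argmin(angles)))
```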
Context: Predicting the priority of bug reports is an important activity in software maintenance. Bug priority refers to the order in which a bug or defect should be resolved. A huge number of bug reports are submitted every day. Manually filtering bug reports and assigning a priority to each report is a heavy process that requires time, resources, and expertise, and mistakes made when priority is assigned manually can prevent developers from finishing their tasks, fixing bugs, and improving quality. Objective: Bugs are widespread, and there is a noticeable increase in the number of bug reports submitted by users and team members while resources remain limited, which raises the need for a model that detects the priority of bug reports and allows developers to find the highest-priority ones. This paper presents a model that predicts and assigns a priority level (high or low) to each bug report. Method: The model considers a set of factors (indicators), such as component name, summary, assignee, and reporter, that may affect the priority level of a bug report. The factors are extracted as features from a dataset built from bug reports taken from closed-source projects stored in the JIRA bug tracking system, which are then used to train and test the framework. This work also presents a tool that helps developers assign a priority level to a bug report automatically, based on the LSTM model's prediction. Results: Our experiments consisted of applying a 5-layer deep learning RNN-LSTM neural network and comparing the results with Support Vector Machine (SVM) and K-nearest neighbors (KNN) for predicting the priority of bug reports. The performance of the proposed RNN-LSTM model was analyzed on the JIRA dataset of more than 2000 bug reports. The proposed model was found to be 90% accurate, compared with KNN (74%) and SVM (87%); on average, RNN-LSTM improves the F-measure by 3% compared to SVM and 15.2% compared to KNN. Conclusion: It is concluded that LSTM predicts and assigns the priority of bugs more accurately and effectively than the other ML algorithms (KNN and SVM), and it significantly improves the average F-measure in comparison with the other classifiers. The study showed that LSTM reported the best results on all performance measures (Accuracy = 0.908, AUC = 0.95, F-measure = 0.892).
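A minimal sketch of an LSTM priority classifier of the kind described: tokenized bug-report text passes through an embedding, an LSTM layer, and a sigmoid output for the high/low decision. The vocabulary size, sequence length, layer widths, and data are illustrative assumptions, not the paper's 5-layer configuration.

```python
# Small Keras LSTM for binary bug-priority prediction; placeholder token data.
import numpy as np
from tensorflow import keras

vocab, seq_len = 5000, 100
X = np.random.randint(1, vocab, size=(2000, seq_len))   # token-id sequences
y = np.random.randint(0, 2, size=2000)                  # 1 = high priority

model = keras.Sequential([
    keras.layers.Embedding(vocab, 64),                  # token embeddings
    keras.layers.LSTM(64),                              # sequence encoder
    keras.layers.Dense(1, activation="sigmoid"),        # high/low decision
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=64, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))
```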
This study offers two Support Vector Machine (SVM) models, for fault detection and fault classification, respectively. Different short circuit events were generated using a 154 kV transmission line modeled in MATLAB/Simulink. The Discrete Wavelet Transform (DWT) is applied to the measured single-terminal current signals before the fault detection stage, and three-level wavelet energies obtained for each of the three phase currents are used as input features for the detector. After fault detection, a half cycle (10 ms) of the three-phase current signals is recorded at a 20 kHz sampling rate, and the recorded current signals are used as inputs to the multiclass SVM classifier. The validation tests demonstrated that a quite reliable fault detection and classification system can be developed using SVM. The generated faults were used for training and testing of the SVM classifiers, and the SVM-based classification and detection models were fully implemented in MATLAB and comprehensively tested under different conditions, investigating the effects of fault impedance, fault inception angle, mother wavelet, and fault location. Finally, the simulation results verify that the offered approach can be used for fault detection and classification on transmission lines.
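A sketch of the feature step: a three-level discrete wavelet decomposition of each phase current, with the energy of each coefficient band used as an SVM input. The currents are synthetic, and 'db4' is an assumed mother wavelet (the study compares several).

```python
# Three-level DWT energies per phase current -> SVM; synthetic fault data.
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_energies(signal, wavelet="db4", level=3):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])   # energy per band

rng = np.random.default_rng(18)
t = np.arange(200) / 20000                     # 20 kHz sampling, 10 ms window
X, y = [], []
for label, amp in [(0, 1.0), (1, 4.0)]:        # 0 = normal, 1 = faulted
    for _ in range(100):
        i_abc = [amp * np.sin(2 * np.pi * 50 * t + ph)
                 + 0.1 * rng.normal(size=200) for ph in (0, 2.1, 4.2)]
        X.append(np.concatenate([wavelet_energies(i) for i in i_abc]))
        y.append(label)

clf = SVC(kernel="rbf").fit(np.array(X), np.array(y))
print("training accuracy:", clf.score(np.array(X), np.array(y)))
```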
Objectives: The key challenge in Content-Based Medical Image Retrieval (CBMIR) frameworks for MRI (Magnetic Resonance Imaging) images is the semantic gap between the low-level visual information captured by the MRI machine and the high-level information perceived by the human evaluator. Methods: Conventional feature extraction strategies focus only on low-level or high-level features and use some handcrafted features to reduce this gap. It is necessary to design a feature extraction framework that reduces this gap without handcrafted features by encoding/combining low-level and high-level features. Fuzzy clustering, a further clustering technique, is applied here for feature description, together with an SVM (Support Vector Machine). Since the predefinition of the number of clusters and the membership matrix is still an open topic, a new predefinition step is developed in this paper; accordingly, a new CBMIR procedure is proposed and validated. Results: SVM and FCM (Fuzzy C-Means) are applied to the intensity structures; consequently, the feature vector contains all the objects of the image. Retrieval of an image relies on the distance between the query and the database images, called the similarity measure. Conclusions: Tests are performed on a 200-image database. Finally, experimental results are evaluated using recall and precision.