Search results
Searched in keywords: przetwarzanie wstępne (preprocessing)
Results found: 11
EN
Digital mammography has served as a unique screening technology protecting women's lives against breast cancer for the past few decades. Mammographic breast density is a well-known biomarker and plays a substantial role in breast cancer prediction and treatment. Breast density is calculated from the opacity of fibro-glandular tissue reflected in digital mammograms relative to the whole breast area. The opacity of the pectoral muscle and of fibro-glandular tissue is similar; hence, even a small presence of the pectoral muscle in the breast area can hamper the accuracy of breast density classification. Successful removal of the pectoral muscle is challenging because its shape, size, and texture change in every MLO and LMO view of a mammogram. In this article, a depth-first search (DFS) algorithm is proposed to remove artifacts and the pectoral muscle from digital mammograms. In the proposed algorithm, image enhancement is performed to improve the pixel quality of the input image. The whole breast is identified as a single connected component against the background region to remove artifacts and tags. The depth-first search method, with and without a heuristic approach, is used to delineate the pectoral muscle, which is then suppressed. The algorithm is tested on 2675 images of the DDSM dataset, divided into four density classes as per the BIRADS classification. Segmentation results are calculated individually for each BIRADS density class of the DDSM dataset. Results are validated subjectively against an expert radiologist's ground truth using segmentation accuracy, and objectively using the Jaccard coefficient and the Dice similarity coefficient. The algorithm proves robust on each density class, providing an overall segmentation accuracy of 86.18% and mean Jaccard index and Dice similarity coefficient values of 0.9315 and 0.9548, respectively. The experimental results show that the proposed algorithm's pectoral muscle removal follows the ground truth marked by an expert radiologist. The proposed algorithm can serve as part of the pre-processing unit of breast density measurement and breast cancer detection systems used in clinical practice.
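The connected-component step lends itself to a short illustration. Below is a minimal sketch, assuming a grayscale mammogram as a NumPy array and an arbitrary global threshold, of keeping the breast as the largest 8-connected component via an iterative depth-first search; it is not the authors' implementation, whose DFS-based pectoral muscle delineation is more involved.

```python
# Minimal sketch: keep the largest connected component (the breast) of a
# binarized mammogram using an iterative depth-first search, so that tags
# and artifacts outside the breast are suppressed. The global threshold and
# 8-connectivity are illustrative assumptions.
import numpy as np

def largest_component_mask(binary: np.ndarray) -> np.ndarray:
    """Boolean mask of the largest 8-connected foreground component."""
    rows, cols = binary.shape
    visited = np.zeros_like(binary, dtype=bool)
    best_coords: list = []
    for sr in range(rows):
        for sc in range(cols):
            if binary[sr, sc] and not visited[sr, sc]:
                stack = [(sr, sc)]              # explicit DFS stack
                visited[sr, sc] = True
                coords = []
                while stack:
                    r, c = stack.pop()
                    coords.append((r, c))
                    for dr in (-1, 0, 1):
                        for dc in (-1, 0, 1):
                            rr, cc = r + dr, c + dc
                            if (0 <= rr < rows and 0 <= cc < cols
                                    and binary[rr, cc] and not visited[rr, cc]):
                                visited[rr, cc] = True
                                stack.append((rr, cc))
                if len(coords) > len(best_coords):
                    best_coords = coords
    mask = np.zeros_like(binary, dtype=bool)
    if best_coords:
        rs, cs = zip(*best_coords)
        mask[rs, cs] = True
    return mask

# Usage sketch, assuming `image` is a 2-D grayscale mammogram array:
# breast = largest_component_mask(image > 30)   # threshold is an assumption
# cleaned = np.where(breast, image, 0)
```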
EN
Accurate modeling of the groundwater level (GWL) is a critical and challenging issue in water resources management. GWL fluctuations depend on many nonlinear hydrological variables and uncertain factors. Therefore, it is important to use an approach that can reduce the parameters involved in the modeling process and minimize the associated errors. This study presents a novel approach for time series structural analysis, multi-step preprocessing, and GWL modeling. We identified the deterministic and stochastic terms of the time series by employing one-, two-, and three-step preprocessing techniques (combinations of trend analysis, standardization, spectral analysis, differencing, and normalization). The approach is tested on the GWL dataset of the Kermanshah plains in the northwest region of Iran, using monthly observations from 60 piezometric stations from September 1991 to August 2017. By removing the dominant nonstationary factors of the GWL data, a linear model with one autoregressive and one seasonal moving-average parameter, detrending, and consecutive non-seasonal and seasonal differencing was created. The quantitative assessment of this model indicates high performance in GWL forecasting, with a coefficient of determination (R²) of 0.94, scatter index (SI) of 0.0004, mean absolute percentage error (MAPE) of 0.0003, root mean squared relative error (RMSRE) of 0.0004, and corrected Akaike information criterion (AICc) of 151. Moreover, the uncertainty and accuracy of the proposed linear method are compared with two conventional nonlinear methods: the multilayer perceptron artificial neural network (MLP-ANN) and the adaptive neuro-fuzzy inference system (ANFIS). The uncertainty of the proposed method was ±0.105, compared to ±0.114 and ±0.126 for the best results of the ANN and ANFIS models, respectively.
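As a rough illustration of the preprocessing-plus-linear-model idea (detrending followed by consecutive non-seasonal and seasonal differencing, with one autoregressive and one seasonal moving-average parameter), the sketch below fits such a model with statsmodels on a synthetic monthly series; the data and exact model orders are assumptions, not the study's.

```python
# Minimal sketch of the linear modelling idea: detrend a monthly
# groundwater-level series, then fit a SARIMA model with one AR term,
# non-seasonal and seasonal differencing, and one seasonal MA term.
# The synthetic data and exact orders are illustrative assumptions.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
n = 312                                   # ~26 years of monthly observations
t = np.arange(n)
gwl = 50 - 0.02 * t + 1.5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.3, n)
series = pd.Series(gwl, index=pd.date_range("1991-09", periods=n, freq="MS"))

# Step 1: remove the deterministic linear trend (trend analysis).
trend = np.polyval(np.polyfit(t, series.values, 1), t)
detrended = series - trend

# Steps 2-3: consecutive non-seasonal and seasonal differencing are handled
# inside SARIMA via d=1 and D=1; one AR and one seasonal MA parameter.
model = SARIMAX(detrended, order=(1, 1, 0), seasonal_order=(0, 1, 1, 12))
result = model.fit(disp=False)
print(result.summary().tables[1])
forecast = result.forecast(steps=12)      # 12-month forecast (detrended scale)
```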
EN
Thermal ablation surgery is one of the main approaches to treating liver tumors. Pretreatment planning, which demands substantial experience and skill from the physician, plays a vital role in thermal ablation surgery. Planning of multiple punctures is necessary to avoid possible interference, destroy the tumor thoroughly, and minimize damage to healthy tissue. A GPU-independent pretreatment planning method is proposed based on multi-objective optimization, which takes the most comprehensive set of constraints into consideration. An adaptive method for choosing the closing kernel size, based on Jenks natural breaks, is used to describe the final feasible region more accurately. A carefully ordered procedure for solving the feasible region and a KD-tree-based high-dimensional search are used to enhance computational efficiency: seven constraints are handled within 7 s without GPU acceleration, and the Pareto front points of nine puncture tests are obtained in 5 s using the NSGA-II algorithm. To evaluate the maximum difference and the similarity between the planning results and the puncture points recommended by the physician, the Hausdorff distance and an overlap rate are used, respectively; the Hausdorff distances are within 30 mm in seven of the nine tests, and the average overlap rate is 73.0% over all tests. The proposed method provides puncture paths of high safety and clinical-practice compliance, and the pretreatment planning software developed on its basis can be applied to training interns and evaluating their skills in thermal ablation surgery.
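The two evaluation measures named above can be illustrated briefly. The sketch below computes a symmetric Hausdorff distance with SciPy and an overlap rate under an assumed tolerance-based definition; the point sets, the tolerance, and the overlap-rate formula are illustrative, not the authors' definitions.

```python
# Minimal sketch of the two evaluation measures: the Hausdorff distance
# between planned and physician-recommended puncture points, and a simple
# overlap rate. Point sets and the 30 mm check are illustrative.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two 3-D point sets (mm)."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

def overlap_rate(a: np.ndarray, b: np.ndarray, tol: float = 5.0) -> float:
    """Fraction of planned points within `tol` mm of a recommended point
    (an assumed definition, for illustration only)."""
    dists = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return float(np.mean(dists.min(axis=1) <= tol))

planned = np.array([[10.0, 20.0, 30.0], [12.0, 22.0, 28.0]])
recommended = np.array([[11.0, 21.0, 29.0], [30.0, 40.0, 50.0]])
hd = hausdorff(planned, recommended)
print(f"Hausdorff: {hd:.1f} mm (within 30 mm: {hd <= 30.0})")
print(f"Overlap rate: {overlap_rate(planned, recommended):.2f}")
```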
EN
One of the methods for the recovery and utilization of waste products from the poultry industry is to subject them to methane fermentation in a biogas plant. These wastes have a high content of fatty compounds and proteins, including keratin, and are prone to rapid spoilage, rancidity, and problems of further management. They are characterized by varying degrees of complexity, so their use as a raw material for a biogas fermenter should be preceded by pre-treatment. An example of waste generated in poultry processing is biological sludge. Optimizing this material with highly enzymatic fungi could accelerate the degradation of the organic matter it contains and, as a result, increase the energy efficiency of this type of waste. Quantitative and qualitative parameters of biogas produced from biological sludge processed by isolated filamentous fungi with high metabolic potential were determined. Laboratory tests were based on a modified methodology following the DIN 38414-S8 and VDI 4630 standards. Based on the results obtained, it was found that pre-optimization of biological sludge by fungal strains with different metabolic potentials influences the yield of biogas production, including methane. The biogas yield from biological sludge processed by a mixed fungal consortium increased by 20%, and by the strain marked F1 by 14%, compared with the non-inoculated material; this was also reflected in the amount of methane produced, which rose by 28% for the mixed fungal consortium and by 12% for strain F1.
EN
The article concerns the classification of selected sign language letters represented as images. The impact of image preprocessing methods such as adaptive thresholding and edge detection is tested. In addition, the influence of filling the detected shapes is checked, as well as of centering the hand in the images. The following classification methods were chosen: an SVM classifier with a linear kernel function, Naive Bayes, and Random Forests. Accuracy, F-measure, AUC, MAE, and the Kappa coefficient were reported as measures of classification quality.
PL
The article concerns the classification of selected sign language alphabet letters in the form of images. The impact of several image preprocessing methods on the results is examined, including adaptive thresholding and edge detection. Additionally, filling of the detected shapes is tested, as well as centering the hand in the images. The following classification methods were chosen: the SVM classifier with a linear kernel function, Naive Bayes, and Random Forest. Classification accuracy, the F-measure, the area under the ROC curve, and the Kappa coefficient are reported as measures of classification quality.
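A minimal sketch of such an experimental setup, assuming OpenCV's adaptive thresholding as the preprocessing step and placeholder images and labels, with the three classifiers named in the abstracts:

```python
# Minimal sketch: adaptive-threshold preprocessing of letter images followed
# by the three classifiers named in the abstract. Image source, sizes, and
# parameter values are illustrative assumptions.
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

def preprocess(gray: np.ndarray) -> np.ndarray:
    """Adaptive thresholding, then flattening to a feature vector."""
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 11, 2)
    return binary.flatten() / 255.0

# Assumed data: a list of 2-D uint8 arrays and their letter labels.
images = [np.random.default_rng(i).integers(0, 256, (32, 32), dtype=np.uint8)
          for i in range(100)]            # placeholder images
labels = np.repeat(np.arange(5), 20)      # placeholder letter labels

X = np.array([preprocess(img) for img in images])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

for clf in (SVC(kernel="linear"), GaussianNB(), RandomForestClassifier()):
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(type(clf).__name__,
          f"accuracy={accuracy_score(y_te, pred):.2f}",
          f"F1={f1_score(y_te, pred, average='macro'):.2f}")
```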
EN
The goal of our work was initial preprocessing of dermoscopic images towards accurate lesion border detection. Four algorithms were proposed and analyzed: MS, using mean shift clustering; HE, using histogram equalization; TTH, using the top-hat transform; and PCA, using principal component analysis. The algorithms were tested on the PH2 image database, which contains 200 dermoscopic images, each with a lesion mask. They were optimized using the lesion masks from the database and the Jaccard index as a measure of the similarity of both sets. Simple statistical analysis of the indices was used to compare the proposed algorithms in terms of their accuracy.
PL
The article addresses the problem of preprocessing dermatoscopic images in order to find the lesion contour. Four algorithms were proposed and compared: MS, using mean shift clustering; HE, using histogram equalization; TTH, using the top-hat transform; and PCA, using principal component analysis. The algorithms were tested on images from the PH2 database, which contains 200 images together with manual outlines, and their parameters were chosen by optimizing the Jaccard index. Simple statistics of the results allowed the proposed algorithms to be compared.
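For illustration, a minimal sketch of a TTH-style preprocessing step and the Jaccard-index evaluation, assuming a black top-hat (the lesion being darker than the surrounding skin) and an arbitrary thresholding rule; the structuring-element size and threshold are assumptions:

```python
# Minimal sketch of the evaluation used above: the Jaccard index between an
# algorithm's binary lesion mask and the PH2 ground-truth mask, plus a
# top-hat preprocessing step (the TTH variant). Parameters are illustrative.
import numpy as np
from scipy import ndimage

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    """Jaccard index |A intersect B| / |A union B| of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

def tth_segment(gray: np.ndarray, size: int = 25, thr: float = 0.2) -> np.ndarray:
    """Black top-hat to emphasise the dark lesion, then a fixed threshold."""
    tophat = ndimage.black_tophat(gray.astype(float), size=size)
    return tophat > thr * tophat.max()

# Usage sketch against a PH2-style ground-truth mask:
# mask = tth_segment(dermoscopic_image)
# print(f"Jaccard: {jaccard(mask, ground_truth_mask):.3f}")
```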
PL
The aim of the work was to analyze the influence of various methods of preprocessing the input data, such as the moving average, exponential smoothing, and the 4253H filter, on the quality of forecasts of hourly electricity demand developed with regression methods. The work was based on the authors' own research carried out in an nN (low-voltage) switching station located at a modern poultry slaughterhouse in the southern part of the Małopolska region. Cluster analyses performed with the k-means and EM methods showed that, given the similarity of the hourly electricity demand profiles, it is optimal to divide the days of the week into three clusters, i.e., working days, days preceding a day off, and days off, and to build three independent models. In practical applications, the most important parameter for assessing the models is the total actual amount of balancing energy ΔESR. For most models built on transformed variables, a decrease in ΔESR was observed relative to models built on the untransformed exogenous variable. The largest reduction in the analyzed indicator, over 6%, was obtained in model III for the input variable smoothed with a Daniel window of span 5. However, because of the lowest total amount of balancing energy, models built on the time series of hourly electricity consumption for the entire plant smoothed with the 4253H filter should be preferred in practical applications.
EN
The objective of this study was to analyse the influence of different methods of preprocessing the input data, such as the moving average, exponential smoothing, and the 4253H filter, on the quality of forecasts of hourly electricity demand developed with regression methods. The study was based on the authors' own research carried out in the nN (low-voltage) switchboard of a modern poultry slaughterhouse in the southern part of the Małopolska region. Cluster analysis carried out with the k-means and EM methods showed that, due to the similarity of the hourly electricity demand profiles, dividing the days of the week into three clusters (working days, days preceding a day off, and days off) and constructing three independent models is optimal. The total value of the actual amount of balancing energy ΔESR is the most important parameter for assessing the models in practical applications. For the majority of models constructed on the transformed variables, a decrease in ΔESR relative to models based on the untransformed exogenous variable was observed. The largest reduction in the analysed indicator, over 6%, was obtained in model III for the input variable smoothed with a Daniel window of span 5. Due to the lowest total amount of balancing energy, models built on the time series of hourly electricity consumption for the entire plant smoothed with the 4253H filter should be preferred in practical applications.
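The 4253H smoother mentioned in both abstracts can be sketched as a compound running-median filter. The following simplified version (running medians of span 4, 2, 5, and 3, then a Hanning pass; the re-roughing step of the full filter is omitted and end-point handling is naive) is an illustration, not the study's implementation:

```python
# Simplified sketch of a compound median smoother in the spirit of the 4253H
# filter: running medians of span 4, 2, 5 and 3, finished with a Hanning
# (1/4, 1/2, 1/4) pass. Re-roughing is omitted; edges are handled naively.
import numpy as np
import pandas as pd

def smooth_4253h(x: pd.Series) -> pd.Series:
    s = x.copy()
    for span in (4, 2, 5, 3):             # consecutive running medians
        s = s.rolling(span, center=True, min_periods=1).median()
    out = np.convolve(s.to_numpy(), [0.25, 0.5, 0.25], mode="same")
    out[0], out[-1] = s.iloc[0], s.iloc[-1]   # keep the end points
    return pd.Series(out, index=x.index)

# Usage sketch on a synthetic hourly electricity-demand series:
hours = pd.date_range("2024-01-01", periods=168, freq="h")
load = pd.Series(40 + 10 * np.sin(np.arange(168) * 2 * np.pi / 24)
                 + np.random.default_rng(1).normal(0, 2, 168), index=hours)
smoothed = smooth_4253h(load)
```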
EN
In the present article, an attempt is made to derive optimal data-driven machine learning methods for forecasting the average daily and monthly rainfall of the city of Fukuoka, Japan. This comparative study concentrates on three aspects: modelling inputs, modelling methods, and pre-processing techniques. A comparison between linear correlation analysis and average mutual information is made to find an optimal input technique. For modelling the rainfall, a novel hybrid multi-model method is proposed and compared with its constituent models: the artificial neural network, multivariate adaptive regression splines, the k-nearest neighbour, and radial basis support vector regression. Each of these methods is applied to model the daily and monthly rainfall, coupled with a pre-processing technique such as the moving average or principal component analysis. In the first stage of the hybrid method, sub-models from each of the above methods are constructed with different parameter settings. In the second stage, the sub-models are ranked with a variable selection technique, and the higher-ranked models are selected based on the leave-one-out cross-validation error. The forecast of the hybrid model is produced by a weighted combination of the finally selected models.
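A minimal sketch of the two-stage hybrid idea (build candidate sub-models, rank them by leave-one-out cross-validation error, forecast with a weighted combination of the best) using scikit-learn; the models, placeholder data, and inverse-error weighting are assumptions rather than the paper's exact design:

```python
# Minimal sketch: candidate sub-models are ranked by leave-one-out CV error;
# the top three are combined with weights inversely proportional to that
# error. Data and weighting rule are illustrative assumptions.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))              # placeholder rainfall predictors
y = X @ np.array([1.0, -0.5, 0.3, 0.0]) + rng.normal(0, 0.2, 60)

# Stage 1: sub-models with different parameter settings.
candidates = (
    [KNeighborsRegressor(n_neighbors=k) for k in (3, 5, 7)]
    + [SVR(kernel="rbf", C=c) for c in (1.0, 10.0)]
    + [MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)]
)

# Stage 2: rank sub-models by LOO error and keep the top three.
loo_mse = [-cross_val_score(m, X, y, cv=LeaveOneOut(),
                            scoring="neg_mean_squared_error").mean()
           for m in candidates]
best = sorted(zip(loo_mse, candidates), key=lambda p: p[0])[:3]

# Weighted combination of the selected models.
weights = np.array([1.0 / mse for mse, _ in best])
weights /= weights.sum()
preds = np.array([m.fit(X, y).predict(X) for _, m in best])
hybrid_forecast = weights @ preds
```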
EN
The paper presents the classification performance of an automatic electrocardiogram (ECG) classifier for the detection of abnormal beats, with a new concept for the feature extraction stage. Feature sets were based on ECG morphology and RR intervals. The paper compares two strategies for the classification of annotated QRS complexes: one based on original ECG morphology features and a proposed new approach based on preprocessed ECG morphology features. Mathematical morphology filtering and the wavelet transform are used for preprocessing the ECG signal. Within this framework, the problem of choosing an appropriate structuring element for mathematical morphology filtering in signal processing was studied. The configuration adopted Kohonen self-organizing maps (SOM) and a support vector machine (SVM) for the analysis of signal features and clustering. Classifiers were developed with the LVQ and SVM algorithms using data from the records recommended by the ANSI/AAMI EC57 standard. The performance of the algorithm is evaluated on the MIT-BIH Arrhythmia Database following the AAMI recommendations. Using this method, the identification of beats as either normal or arrhythmic was improved.
PL
The article presents a new approach to the classification of ECG recordings for the detection of pathological behaviour. The concept of the feature extraction stage is based on preprocessing the ECG signal using mathematical morphology and other transformations. Mathematical morphology, based on set theory, makes it possible to modify characteristic elements of the signal. Its two basic operations, dilation and erosion, allow the size and shape of specific elements in the data to be emphasised or reduced. Characteristic parameters of the ECG recordings form the basis of the feature vector. For the classification of ECG waveforms, Kohonen self-organizing maps (SOM) with an LVQ classifier and the Support Vector Machines (SVM) algorithm were used. Experiments were carried out classifying signals into the thirteen categories recommended by the ANSI/AAMI EC57 standard, i.e., normal heart rhythm and 12 arrhythmias. The algorithm proposed in the article relies on elementary operations of mathematical morphology and their combinations. The experimental results were evaluated on signals from the MIT/BIH database. On this basis, an initial architecture of a morphological filter block was proposed for feature extraction and for unifying the input ECG signal as input data for building the feature vector.
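The morphological preprocessing described above can be illustrated with a common baseline-removal construction: grey-level opening followed by closing with a flat structuring element estimates the baseline wander, which is then subtracted. The structuring-element width and the synthetic trace below are assumptions; choosing the element is precisely what the paper studies.

```python
# Minimal sketch: grey-level opening (erosion then dilation) and closing with
# a flat structuring element estimate the ECG baseline; subtracting it keeps
# the QRS morphology. The element width is an illustrative assumption.
import numpy as np
from scipy import ndimage

def remove_baseline(ecg: np.ndarray, width: int = 71) -> np.ndarray:
    baseline = ndimage.grey_closing(ndimage.grey_opening(ecg, size=width),
                                    size=width)
    return ecg - baseline

# Usage sketch on a synthetic ECG-like trace with slow baseline drift:
t = np.arange(0, 5, 1 / 360)              # 5 s at MIT-BIH's 360 Hz rate
drift = 0.5 * np.sin(2 * np.pi * 0.3 * t)
spikes = (np.abs((t % 1.0) - 0.5) < 0.01).astype(float)   # crude "QRS" pulses
filtered = remove_baseline(spikes + drift)
```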
Combined off-line type signature recognition method
EN
In this paper, off-line signature analysis is presented. Signature recognition is based on a set of features; the individual influence of these features was tested and reported. The proposed approach gives a good signature recognition level; hence, the described method can be used in many areas, for example in biometric authentication, as biometric computer protection, or as a method for analysing changes in a person's behaviour.
PL
The goal of this work was to develop a universal noise reduction technique for color digital images based on the idea of digital paths, understood as trajectories of a virtual particle wandering over the image lattice. The proposed filters use the general structure of a fuzzy filter, with a membership function computed from the cost of connecting pixels via digital paths. The work provides a broad review of techniques used for improving the quality of digital images and compares the effectiveness of the new algorithms with those known from the literature. An analysis of the computational complexity of the proposed filters is also included.
EN
In this thesis, a novel class of multichannel filters for low-level color image processing is formulated and evaluated. The new filter class exploits all topological connections between adjacent image pixels using the concept of digital paths, which can be seen as trajectories of a virtual particle performing a random walk on the image lattice. The filters introduced in this thesis utilize fuzzy membership functions defined over vectorial inputs connected via digital paths. The efficiency of the new filters is compared, under a variety of performance criteria, to that of commonly used techniques such as the vector median filter, the generalized vector directional filter, and the anisotropic diffusion approach. It is shown that, compared with existing techniques, the filters introduced here are better able to suppress impulsive, Gaussian, and mixed-type noise in color digital images.
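As a strongly simplified illustration of the digital-paths concept, the sketch below weights two-step paths in each pixel's neighbourhood with a fuzzy membership function exp(-beta * path cost), the cost being the summed colour distance along the path, and averages the path end points. The path length, beta, and neighbourhood are assumptions; the actual filter class is considerably more general.

```python
# Strongly simplified sketch of a digital-path fuzzy filter: two-step paths
# get weights exp(-beta * path cost); the output averages path end points.
# Path length, beta and the 8-neighbourhood are illustrative assumptions.
import numpy as np

OFFSETS = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
           if (dr, dc) != (0, 0)]

def digital_path_filter(img: np.ndarray, beta: float = 0.05) -> np.ndarray:
    """img: H x W x 3 float RGB image; returns the filtered image."""
    h, w, _ = img.shape
    out = img.copy()
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            acc = np.zeros(3)
            total = 0.0
            for dr1, dc1 in OFFSETS:           # first path step
                p1 = img[r + dr1, c + dc1]
                cost1 = np.linalg.norm(img[r, c] - p1)
                for dr2, dc2 in OFFSETS:       # second path step
                    rr, cc = r + dr1 + dr2, c + dc1 + dc2
                    if 0 <= rr < h and 0 <= cc < w:
                        p2 = img[rr, cc]
                        weight = np.exp(-beta * (cost1 + np.linalg.norm(p1 - p2)))
                        acc += weight * p2     # path end point contributes
                        total += weight
            out[r, c] = acc / total
    return out

# Usage sketch on a noisy synthetic patch:
noisy = np.clip(np.full((32, 32, 3), 128.0)
                + np.random.default_rng(2).normal(0, 25, (32, 32, 3)), 0, 255)
denoised = digital_path_filter(noisy)
```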