Results found: 176

Search results
Searched for:
in keywords: segmentation
EN
The article examines current issues of beer production related to yeast foam formation during fermentation, and discusses the importance of controlling the fermentation stages to ensure proper product quality. Computer vision technologies were applied to identify the stages of the main fermentation. Based on an analysis of computer vision algorithms, the K-means method was used for image clustering. A systematic description of the algorithm for detecting contaminated foam based on the K-means method is provided.
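The clustering step described above can be sketched as a minimal K-means over pixel colour vectors. This is an illustrative implementation, not the authors' code; the function name and the toy "foam" data are invented for demonstration.

```python
import numpy as np

def kmeans_pixels(pixels, k=2, iters=20, seed=0):
    """Minimal K-means over pixel feature vectors (e.g. RGB rows)."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # assign each pixel to its nearest cluster centre
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute centres as the mean of their members
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

# toy "foam image": bright foam pixels vs dark contaminated spots
img = np.array([[250, 250, 245], [248, 252, 250],
                [40, 35, 30], [45, 38, 28]], dtype=float)
labels, centers = kmeans_pixels(img, k=2)
```

On real frames the rows would be the flattened image pixels, and the cluster with the darker centre could be flagged as potentially contaminated foam.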
EN
Detection and segmentation of civilian aircraft from satellite imagery is important in applications for air traffic management, surveillance, and defense. Yet visual confusion between aircraft types and the lack of a unified recognition scheme make the task difficult. This paper presents an efficient YOLOv8-based model for aircraft detection, classification, and segmentation on the FAIR1M-2.0 dataset. The proposed methodology involves dataset preprocessing and compatibility adjustments; the backbone is CSPDarknet53 combined with the C2f module, which provides an efficient multi-scale representation, a critical requirement for distinguishing among 11 unique categories of aircraft. Including the SAM model improves localization precision by achieving more accurate pixel-level segmentation. The present work carries out accurate classification and description of civilian aircraft, with enhanced detection and quantification capability appropriate for complex satellite-oriented aircraft analysis, and thus satisfies the fundamental requirement for highly accurate identification and evaluation of aerial images. The approach improves the accuracy and precision of aircraft classification on fine-grained satellite images, and is therefore useful in operations for real-time surveillance and monitoring. Fine-grained classification and segmentation can effectively capture slight differences between aircraft types, which is vital to the reliable management of airspaces. This work therefore lays a solid foundation for the future development of high-resolution aerial analysis in diverse operational settings.
EN
River segmentation is important in delivering essential information for environmental analytics such as water management, flood/disaster management, and observations of climate change or human activities. Advances in remote-sensing technology have introduced more complex features that limit the effectiveness of traditional approaches. This work uses deep-learning-based models to enhance river extraction from satellite imagery. With ResNet-50 as the backbone network, CNN U-Net and DeepLabv3+ were utilized to perform river segmentation of Sentinel-1 C-Band synthetic aperture radar (SAR) imagery. The SAR data was selected due to its capability to capture surface details regardless of weather conditions, with VV+VH band polarizations being employed to improve water surface reflectivity. A total of 1080 images were utilized to train and test the models. The models' performance was measured using the Dice coefficient. The CNN U-Net architecture achieved an accuracy of 0.94, while DeepLabv3+ attained an accuracy of 0.92. Although DeepLabv3+ showed more stability during training and performed better on wider rivers, CNN U-Net excelled at identifying narrow rivers. In conclusion, a river-segmentation model was developed using Sentinel-1 C-Band SAR data, with CNN U-Net outperforming DeepLabv3+; this enabled detailed river mapping for irrigation and flood-monitoring applications, particularly in cloud-prone tropical regions.
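The Dice coefficient used to score both models is straightforward to compute over binary masks. The sketch below is generic, not the paper's implementation; the function name and toy masks are invented.

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity: 2|A ∩ B| / (|A| + |B|) over binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# toy predicted and reference masks with 2 overlapping foreground pixels
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
score = dice(a, b)  # 2*2 / (3+3) = 0.666...
```

The small epsilon keeps the ratio defined when both masks are empty, a common convention in segmentation evaluation.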
EN
The advent of deep learning enabled the extraction of complex feature representations from medical imaging data, which was previously considered impossible with standard machine learning. Applications of deep learning in the field of medical image analysis have yielded significant results. A key feature of deep learning techniques is their ability to automatically learn task-specific feature representations and extract relevant features without human intervention. Various deep learning models, including CNN, AlexNet, ResNet, DenseNet and U-Net, were developed for medical image analysis. Among these models, U-Net is a popular model used for medical image segmentation. The present article provides a comprehensive review of deep learning segmentation models that use U-Net and its variants in the domain of medical image segmentation, specifically tailored to medical imaging modalities such as ultrasound and MRI, along with their respective pros and cons in the field of image segmentation. The analysis reveals that the performance of different U-Net variants varies significantly based on imaging modality and segmentation complexity.
EN
The paper presents a deep learning-based approach for text segmentation from images, utilizing a combination of a Fully Convolutional Network (FCN) and a Recurrent Neural Network (RNN). The algorithm achieves high accuracy in identifying and separating text regions from non-text regions, performing well with diverse text styles, fonts, backgrounds, and various languages. It outperforms state-of-the-art methods and proves to be a robust and versatile solution applicable to OCR and document analysis tasks.
EN
Underwater imagery (UI) is an important and sometimes the only tool for mapping hard-bottom habitats. With the development of new camera systems, from hand-held or simple "drop-down" cameras to ROV/AUV-mounted video systems, video data collection has increased considerably. However, processing and analysing vast amounts of imagery can become very labour-intensive, making it ineffective both time-wise and financially. This task could be simplified if the processes or their intermediate steps could be done automatically. Luckily, the rise of AI applications for automatic image analysis tasks in the last decade has empowered researchers with robust and effective tools. In this study, two ways to make UI analysis more efficient were tested on eight dominant visual features of the Southeastern Baltic reefs: 1) simplifying video processing and expert annotation efforts by skipping the video mosaicking step and reducing the number of frames analysed; 2) applying semantic segmentation of UI using deep learning models. The results showed that the annotation of individual frames provides similar results to 2D mosaics; moreover, reducing the number of frames by 2–3 times resulted in only minor differences from the baseline. Semantic segmentation using the PSPNet model as the deep learning architecture was extensively evaluated, applying three variants of validation. The accuracy of segmentation, as measured by the intersection-over-union, was mediocre; however, estimates of visual coverage percentages were fair: the difference between the expert annotations and model-predicted segmentation was less than 6–8%, which can be considered an encouraging result.
EN
The amount of damage to cultural heritage sites is increasing rapidly every year. This is due to inadequate heritage management and uncontrolled urban growth, as well as unpredictable seismic and atmospheric events that manifest themselves in a continuously deteriorating ecosystem. Thus, applications of artificial intelligence (AI) in remote-sensing (RS) techniques (machine-learning and deep-learning algorithms) for monitoring archaeological sites have increased in recent years. This research focuses on the area surrounding the archaeological site of Chan Chan in Peru. An approach based on the use of AI algorithms for building footprint segmentation and change-detection analysis by means of RS images is proposed. It involves a U-Net convolutional network based on an EfficientNet B0 to B7 encoder. The network was trained on two public data sets from SpaceNet that were based on WV2 and WV3 satellite images: SpaceNet V1 (Rio) and SpaceNet V2 (Shanghai). In the pre-processing phase, the images from the two data sets were equalized in order to improve their quality and avoid overfitting. The building segmentation was performed on HRV images of the study area that were downloaded from Google Earth Pro. The value achieved in the IoU metric was around 70% in both experiments. The purpose of the proposed methodology is to assist scientists in drafting monitoring and conservation protocols based on already-recorded data in order to prevent future disasters and hazards.
EN
Urbanization has sparked an increase in the construction of multi-use high-rise buildings, which consist of commercial parcels on their lower floors and residential parcels on their higher floors. In contrast to conventional landed houses, the residents of high-rise buildings share common facilities, and private parcels or spaces also differ according to ownership or use. The management and maintenance of these spaces depend on the ownership of the parcel, where each ownership adheres to different rights, restrictions, and responsibilities (RRRs). Therefore, accurate representation and identification of the parcels affected by maintenance or renovation is crucial for assisting management bodies in improving the quality of life within a multi-use high-rise building. This study attempts to implement temporal maintenance management for high-rise building parcels within a 3D spatial database. A 3D space segmentation was done to analyze the ownership and use of space in a high-rise building. Spatial queries were also performed based on the temporal maintenance of the parcels; in addition, 3D spatial relationships were used to determine adjacent parcels that were affected by the maintenance. Thus, the implementation of temporal strata database management with an accurate 3D representation of the space can provide management bodies with concise and comprehensive information on parcels with respect to ownership and use.
EN
Object detection and tracking using the fusion of LiDAR and an RGB camera in the autonomous-vehicle environment is a challenging task. Existing works have introduced several object detection and tracking frameworks using Artificial Intelligence (AI) algorithms. However, they suffer from high false-positive rates and long computation times, limiting their performance in the autonomous driving environment. These issues are addressed by the proposed Hybrid Deep Learning based Multi-Object Detection and Tracking (HDL-MODT) approach using sensor-fusion methods. The proposed work fuses solid-state LiDAR, pseudo-LiDAR, and RGB camera data to improve detection and tracking quality. First, multi-stage preprocessing is performed, in which noise removal is done using an Adaptive Fuzzy Filter (A-Fuzzy). The pre-processed fused image is then provided for instance segmentation to reduce classification and tracking complexity; for this, the proposed work adopts Lightweight Generative Adversarial Networks (LGAN). The segmented image is provided for object detection and tracking using HDL. To reduce complexity, the proposed work utilizes VGG-16 for feature extraction, which forms the feature vectors. The feature vectors are then provided for object detection using YOLOv4. Finally, the detected objects are tracked using an Improved Unscented Kalman Filter (IUKF), and the vehicles are mapped using time-based mapping that considers their RFID, velocity, location, dimensions and unique ID. The simulation of the proposed work is carried out using the MATLAB R2020a simulation tool, and comparison on several metrics shows that the proposed work outperforms the existing works.
EN
Objectives: This study aims to develop an advanced and efficient deep learning-based approach for the detection and segmentation of cell nuclei in microscopic images. By exploiting the U-Net architecture, this research helps to overcome the limitations of traditionally followed computational methods, enhancing the precision and scalability of biomedical image analysis. Methods: This research utilizes a deep learning model based on the U-Net architecture, trained and evaluated for cell nuclei segmentation. The model was optimized by fine-tuning parameters and applying data augmentation techniques, with performance metrics such as Intersection over Union (IoU) employed for evaluation. Comparisons were made with traditional segmentation techniques to assess improvements in accuracy, efficiency, and robustness. Results: The U-Net model demonstrated superior performance in segmenting cell nuclei compared to conventional methods. The results showed increased segmentation accuracy, reduced manual effort, and enhanced reproducibility across different imaging datasets. The model's high IoU values confirmed its effectiveness in accurately identifying cell nuclei boundaries, making it a reliable tool for automated biomedical image analysis. Conclusions: The study highlights the effectiveness of the U-Net architecture in automated cell nuclei detection and segmentation, addressing challenges associated with manual analysis. Its scalability and adaptability extend its applicability beyond cell nuclei segmentation to other biomedical imaging tasks, offering significant potential for disease diagnosis, therapeutic development, and clinical decision-making. The findings reinforce the transformative impact of deep learning in biomedical research and healthcare applications.
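The IoU metric mentioned for evaluation can be computed directly from binary masks. This is a generic sketch, not the study's implementation; the function name and toy arrays are invented.

```python
import numpy as np

def iou(pred, target):
    """Intersection over Union for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return np.logical_and(pred, target).sum() / union

# toy masks: 2 pixels agree, 1 pixel disagrees -> IoU = 2/3
a = np.array([1, 1, 0, 1])
b = np.array([1, 0, 0, 1])
val = iou(a, b)
```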
EN
Cardiovascular diseases, especially myocardial infarction and heart failure, are among the most common causes of death. Proper, timely diagnosis can be a key factor in reducing the mortality of these diseases. In the present paper, a statistical data analysis of the left ventricle of the human heart is presented. Raster DICOM images are processed, segmented and registered in order to mark the left ventricle on medical images, and then to obtain its geometrical 3D models of constant topology. Registered geometrical data, obtained for the whole cardiac cycle of patients with healthy hearts, hypertrophy and heart failure, is then decomposed using Principal Component Analysis. The obtained modes represent the movement of the ventricle during one heart cycle. The proposed approach allows neglecting unimportant, noisy signal and enables the interpretation of the heart cycle. It is shown that modal decomposition might be used to distinguish hearts with heart failure from the group containing healthy hearts and those with hypertrophy. Being a non-invasive method, this approach enables the diagnosis of various hearts, including prenatal ones.
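The modal decomposition step can be illustrated with PCA via SVD of a shape ensemble. The synthetic data below is invented to mimic one dominant deformation mode over a cardiac cycle; it is not the paper's data or code.

```python
import numpy as np

# toy ensemble: each row is a flattened ventricle geometry at one cardiac phase
rng = np.random.default_rng(1)
phases = np.linspace(0, 2 * np.pi, 20)
mode = np.sin(np.linspace(0, np.pi, 50))       # one dominant deformation mode
shapes = np.outer(np.cos(phases), mode) + 0.01 * rng.standard_normal((20, 50))

# PCA: centre the ensemble, then SVD gives the principal modes (rows of vt)
mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
u, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)   # fraction of variance per mode
```

Thresholding `explained` is one way to neglect the noisy, unimportant modes that the abstract mentions, keeping only those that describe the dominant ventricular motion.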
EN
Purpose: Segmentation of students according to the values of predictors of choosing a field of study, determining the importance of these predictors and indicating their consequences for university marketing. Design/methodology/approach: The research was carried out using an original questionnaire on a sample of 240 students of the Poznań University of Economics in the fields of product quality and development (JiRP) and production management and engineering (ZIP). Classification trees and the CART algorithm were used to develop the ranking of predictors and the characteristics of the obtained segments. Findings/conclusions: A model for classifying students according to predictors related to the criteria for choosing a field of study was built. The most important predictors turned out to be: (1) the name of the field of study, (2) the possibility of obtaining the professional title of engineer and (3) sources of information about the future field of study. Research limitations: Small sample size (240 students) and only 2 fields of study included. Practical implications: Providing recommendations important for effective university marketing activities. In the JiRP field, the decisive predictor is the possibility of obtaining a professional engineering title, while in the ZIP field, the name of the field of study is such a predictor. Originality/value: Application of classification trees in the study area. Obtained student segmentation, ranking of choice predictors and indication of marketing implications of the results of these analyses.
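CART grows its tree by greedily choosing, at each node, the split that minimizes the weighted Gini impurity of the children. A minimal sketch of that split search follows; the function names and toy data are illustrative, not taken from the study.

```python
import numpy as np

def gini(labels):
    """Gini impurity used by CART to score candidate splits."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p**2)

def best_split(feature, labels):
    """Exhaustive CART-style search for the threshold minimizing
    the weighted Gini impurity of the two child nodes."""
    best_t, best_score = None, np.inf
    for t in np.unique(feature)[:-1]:
        left, right = labels[feature <= t], labels[feature > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score

# toy data: a predictor value that separates the two student segments perfectly
x = np.array([1, 2, 3, 10, 11, 12])
y = np.array([0, 0, 0, 1, 1, 1])
t, score = best_split(x, y)
```

Repeating this search recursively on each child node, over all predictors, is what produces the classification tree and the predictor ranking reported in the study.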
EN
In the field of medicine there is a need for the automatic detection of retinal disorders. Blindness in older persons is primarily caused by Central Retinal Vein Occlusion (CRVO). It results in rapid, irreversible eyesight loss; therefore, it is essential to identify and address CRVO as soon as feasible. Hemorrhages, which can differ in size, pigment, and shape, from dot-shaped to flame hemorrhages, are one of the earliest symptoms of CRVO. These early signs are, however, so mild that ophthalmologists must dynamically observe such indicators in the retinal image, known as the fundus image, which is a challenging and time-consuming task. It is also difficult to segment hemorrhages (HE), since blood vessels and hemorrhages share the same color properties; moreover, hemorrhages have no particular shape and are scattered all over the fundus image. A challenging study is needed to extract the characteristics of vein deformability and dilatation. Furthermore, the quality of the captured image affects the efficacy of feature-identification analysis. In this paper, a deep learning approach for CRVO extraction is proposed.
EN
Breast cancer causes a huge number of women's deaths every year. The accurate localization of a breast lesion is a crucial stage, and the segmentation of breast ultrasound images contributes to improving the detection of breast anomalies. An automatic approach to segmentation of breast ultrasound images is presented in this paper. The proposed model is a modified U-Net called Attention Residual U-Net, designed to help radiologists in their clinical examination to adequately determine the boundaries of breast tumors. Attention Residual U-Net is a combination of existing models (the convolutional neural network U-Net, the attention gate mechanism and the residual neural network). A public breast ultrasound image dataset from Baheya Hospital in Egypt is used in this work. The Dice coefficient, Jaccard index and accuracy are used to evaluate the performance of the proposed model on the test set. Attention Residual U-Net achieves a Dice coefficient of 90%, a Jaccard index of 76% and an accuracy of 90%. The proposed model is compared with two other breast segmentation methods on the same dataset. The results show that the modified U-Net model was able to achieve accurate segmentation of breast lesions in breast ultrasound images.
EN
The digital revolution is changing every aspect of life by simulating the ways humans think, learn and make decisions. Dentistry is one of the major fields where subsets of artificial intelligence are extensively used for disease prediction. Periodontitis, the most prevalent oral disease, is the main focus of this study. We propose methods for classifying and segmenting periodontal cysts on dental radiographs using CNN, VGG16, and U-Net. An accuracy of 77.78% is obtained using the CNN, and an enhanced accuracy of 98.48% is obtained through transfer learning with VGG16. The U-Net model also gives encouraging results. This study presents promising results, and in the future the work can be extended with other pre-trained models and compared. Researchers working in this field can develop novel methods and approaches to support dental practitioners and periodontists in decision-making and diagnosis, and use artificial intelligence to bridge the gap between humans and machines.
EN
Researchers address the generalization problem of deep image processing networks mainly through extensive use of data augmentation techniques such as random flips, rotations, and deformations. A data augmentation technique called mixup, which constructs virtual training samples from convex combinations of inputs, was recently proposed for deep classification networks. The algorithm contributed to increased performance on classification in a variety of datasets, but so far has not been evaluated for image segmentation tasks. In this paper, we tested whether the mixup algorithm can improve the generalization performance of deep segmentation networks for medical image data. We trained a standard U-net architecture to segment the prostate in 100 T2-weighted 3D magnetic resonance images from prostate cancer patients, and compared the results with and without mixup in terms of Dice similarity coefficient and mean surface distance from a reference segmentation made by an experienced radiologist. Our results suggest that mixup offers a statistically significant boost in performance compared to non-mixup training, leading to up to 1.9% increase in Dice and a 10.9% decrease in surface distance. The mixup algorithm may thus offer an important aid for medical image segmentation applications, which are typically limited by severe data scarcity.
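The mixup construction, a convex combination of two samples with a Beta-distributed weight, can be sketched as follows. For segmentation, the label mask is mixed the same way as the image; the function name and toy arrays are illustrative, not the paper's code.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Construct a virtual training sample as a convex combination of two
    real samples (the mixup scheme): lam ~ Beta(alpha, alpha)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# for segmentation, y is a (soft) label mask mixed identically to the image
img_a, mask_a = np.ones((4, 4)), np.ones((4, 4))
img_b, mask_b = np.zeros((4, 4)), np.zeros((4, 4))
x, y = mixup(img_a, mask_a, img_b, mask_b, rng=np.random.default_rng(0))
```

Because the masks become soft (non-binary), the segmentation loss must accept soft targets, which Dice-style and cross-entropy losses both do.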
EN
The paper examines the features of segmentation of the upper respiratory tract for determining nasal air conduction. 2D and 3D illustrations of the segmentation process and the obtained results are given. In forming an analytical model of the aerodynamics of the nasal cavity, the main indicator characterizing the configuration of the nasal canal is the equivalent diameter, which is determined at each cross-section of the nasal cavity and calculated from the area and perimeter of the corresponding section of the nasal canal. When segmenting the nasal cavity, it is first necessary to eliminate air structures that do not affect the aerodynamics of the upper respiratory tract: above all, the intact spaces of the paranasal sinuses, in which diffuse air exchange prevails. In automatic mode this is possible by eliminating unconnected isolated areas and, in the next step, finding the difference coefficients of the areas connected by confluences with the nasal canal. High difference coefficients of sections between cross-sections indicate the presence of separated areas and contribute to their elimination. The complex configuration and high individual variability of the structures of the nasal cavity do not allow segmentation to be fully automated, but this approach avoids the need for interactive correction in 80% of tomographic datasets. The proposed method, which takes into account the intensity of image elements close to the contour, reduces the averaging error from tomographic reconstruction by up to a factor of two through artificial sub-resolution. A prospective direction of the work is the development of methods for fully automatic segmentation of the structures of the nasal cavity, taking into account the individual anatomical variability of the upper respiratory tract.
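The abstract states that the equivalent diameter is calculated from the area and perimeter of a cross-section but does not give the formula; the standard equivalent (hydraulic) diameter with that property is D = 4A / P. The sketch below is written under that assumption.

```python
import math

def equivalent_diameter(area, perimeter):
    """Hydraulic (equivalent) diameter of a cross-section: D = 4A / P.
    For a circular section this recovers the geometric diameter."""
    return 4.0 * area / perimeter

# sanity check on a circle of radius 3: D should equal 2r = 6
r = 3.0
d = equivalent_diameter(math.pi * r**2, 2 * math.pi * r)
```

For non-circular nasal canal sections, D penalizes elongated or convoluted shapes (larger perimeter for the same area), which is why it is a useful single-number descriptor of airflow resistance.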
EN
Lung cancer is one of the leading causes of cancer-related deaths among individuals. It should be diagnosed at the early stages; otherwise it may prove fatal due to its malignant nature. Early detection of the disease is very significant for patients' survival, and it is a challenging issue. Therefore, a new model is proposed that includes the following stages: (1) image pre-processing, (2) segmentation, (3) the proposed feature extraction and (4) classification. Initially, the input image undergoes specific pre-processing. The pre-processed images are then subjected to segmentation, which is carried out using the Otsu thresholding model. The third phase is feature extraction, where the major contribution is obtained: specifically, 4D global local binary pattern (LBP) features are extracted. The extracted features are then subjected to classification, for which an optimized convolutional neural network (CNN) model is exploited. For more precise detection of a lung nodule, the filter size of the convolution layer, the hidden units in the fully connected layer and the activation function of the CNN are tuned optimally by an improved whale optimization algorithm (WOA) called the whale with tri-level enhanced encircling behavior (WTEEB) model.
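Otsu thresholding, used here for the segmentation stage, picks the grey level that maximizes the between-class variance of the resulting foreground/background split. The self-contained sketch below is a generic implementation on a toy bimodal image, not the paper's code.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the threshold maximizing between-class variance."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0   # class means
        mu1 = (levels[t:] * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# toy bimodal "scan": dark background at 40, bright structure at 200
img = np.concatenate([np.full(100, 40), np.full(100, 200)])
t = otsu_threshold(img)
```

Pixels above `t` form the segmented foreground mask that feeds the LBP feature-extraction stage.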
EN
Objectives: Intervertebral disc segmentation is one of the methods to diagnose spinal disease through degeneration in asymptomatic and symptomatic patients. Even though numerous intervertebral disc segmentation techniques are available, classifying the grades of the intervertebral disc remains a major challenge for existing disc segmentation methods. Thus, an effective Whale Spine Generative Adversarial Network (WSpine-GAN) method is proposed to segment the intervertebral disc for effective grade classification. Methods: The proposed WSpine-GAN method effectively performs the disc segmentation, wherein the weights of Spine-GAN are optimally tuned using the Whale Optimization Algorithm (WOA). Then, refined disc features, such as pixel-based features and connectivity features, are extracted. Finally, a K-Nearest Neighbor (KNN) classifier based on the Pfirrmann grading system performs the grade classification. Results: The grade classification strategy based on the proposed WSpine-GAN and KNN was implemented using a real-time database, and the performance metrics yielded accuracy, true positive rate (TPR), and false positive rate (FPR) values of 97.778%, 97.83%, and 0.586% for the training percentage, and 92.382%, 90.580%, and 1.972% for the K-fold value. Conclusions: The proposed WSpine-GAN method effectively performs the disc segmentation by integrating the Spine-GAN method and WOA. Here, the spinal cord images are segmented using the proposed WSpine-GAN method by tuning the weights optimally to enhance the performance of the disc segmentation.
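The KNN grade-classification step can be sketched as a majority vote among the k nearest neighbours in feature space. The feature vectors and grade labels below are invented for illustration; they are not the study's data.

```python
import numpy as np

def knn_predict(train_x, train_y, query, k=3):
    """Classify a query by majority vote among its k nearest neighbours."""
    d = np.linalg.norm(train_x - query, axis=1)      # Euclidean distances
    nearest = train_y[np.argsort(d)[:k]]             # labels of k closest
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[counts.argmax()]

# toy disc feature vectors labelled with two hypothetical Pfirrmann grades
x = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [1.0, 1.0], [0.9, 1.0], [1.0, 0.9]])
y = np.array([1, 1, 1, 4, 4, 4])
grade = knn_predict(x, y, np.array([0.05, 0.05]))
```

In the study's pipeline the rows of `x` would be the pixel-based and connectivity features extracted from the WSpine-GAN segmentations.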
EN
In the ceramic industry, quality control is performed using visual inspection at three different product stages: green, biscuit, and the final ceramic tile. To develop a real-time computer visual inspection system, a necessary step is successful tile segmentation from the background. In this paper, a new statistical multi-line signal change detection (MLSCD) segmentation method, based on the signal change detection (SCD) method, is presented. Through experimental results on seven different ceramic tile image sets, MLSCD performance is analyzed and compared with the SCD method. Finally, recommended parameters are proposed for optimal performance of the MLSCD method.