Search results
Searched in keywords: semantic segmentation
Results found: 6
EN
Mushrooms are a rich source of antioxidants and nutrients. Edible mushrooms, however, are susceptible to diseases such as dry bubble, wet bubble, cobweb, bacterial blotches, and mites, which cause farmers significant production losses. Manual detection of these diseases relies on expertise, knowledge of the diseases, and human effort, so computer-aided methods are needed as practical substitutes for detecting and segmenting disease. In this paper, we propose a semantic segmentation approach based on the Random Forest machine learning technique for the detection and segmentation of mushroom diseases. We extract a combination of features, including Gabor, Bouda, Kayyali, Gaussian, Canny edge, Roberts, Sobel, Scharr, Prewitt, Median, and Variance. We employ constant mean-variance thresholding and the Pearson correlation coefficient to select significant features, aiming to increase computational speed and reduce the complexity of training the Random Forest classifier. Our results indicate that Random Forest-based semantic segmentation outperforms methods such as Support Vector Machine (SVM), Naïve Bayes, K-means, and Region of Interest in accuracy, and achieves higher precision, recall, and F1 score than SVM. Deep learning-based semantic segmentation methods were not considered due to the limited availability of diseased-mushroom images.
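As a rough illustration of the pipeline described above, the sketch below computes one hand-crafted per-pixel feature map (Sobel gradient magnitude, standing in for the paper's full filter bank of Gabor, Canny, Prewitt, etc.) and selects feature maps by their Pearson correlation with a ground-truth mask. The 0.3 threshold and all function names are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def sobel_features(img):
    """Sobel gradient magnitude as one per-pixel feature map
    (a stand-in for the paper's full filter bank)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros(img.shape, float)
    gy = np.zeros(img.shape, float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)

def select_by_pearson(features, mask, thresh=0.3):
    """Keep the indices of feature maps whose per-pixel values
    correlate with the ground-truth mask (threshold is illustrative)."""
    y = mask.ravel().astype(float)
    keep = []
    for k, f in enumerate(features):
        r = np.corrcoef(f.ravel(), y)[0, 1]
        if abs(r) >= thresh:
            keep.append(k)
    return keep
```

The selected feature columns would then be stacked pixel-wise and fed to a Random Forest classifier for per-pixel labelling.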
EN
The liver is a vital organ of the human body, and hepatic cancer is one of the major causes of cancer deaths. Early and rapid diagnosis can reduce the mortality rate and can be achieved through computerized cancer diagnosis and surgery planning systems, in which segmentation plays a major role. This work evaluated the efficacy of the SegNet model for liver segmentation and of a particle swarm optimization-based clustering technique for liver lesion segmentation. Over 2400 CT images were used to train the deep learning network, and ten CT datasets were used to validate the algorithm. The segmentation results were satisfactory: the Dice coefficient and volumetric overlap error were 0.940 ± 0.022 and 0.112 ± 0.038, respectively, for the liver, and 0.4629 ± 0.287 and 0.6986 ± 0.203, respectively, for lesion delineation. The proposed method is effective for liver segmentation; however, lesion segmentation needs further improvement for better accuracy.
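The two metrics reported above have standard definitions for binary masks A (prediction) and B (ground truth): Dice = 2|A∩B| / (|A|+|B|), and volumetric overlap error = 1 − |A∩B| / |A∪B|. A minimal sketch:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient of two binary masks: 2|A∩B| / (|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def voe(a, b):
    """Volumetric overlap error: 1 − |A∩B| / |A∪B|."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return 1.0 - inter / union
```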
EN
The paper focuses on the automatic segmentation of bone structures from CT data series of the pelvic region. The authors trained and compared four deep neural network models (FCN, PSPNet, U-Net and SegNet) on a segmentation task with the following three classes: background, patient outline and bones. The mean and class-wise Intersection over Union (IoU), Dice coefficient and pixel accuracy were evaluated for each network. In the initial phase, all networks were trained for 10 epochs; the most accurate segmentation was obtained with the U-Net model, with a mean IoU of 93.2%. These results were further improved by a U-Net modification with ResNet50 as the encoder, trained for 30 epochs, which obtained the following results: mIoU 96.92%, "bone" class IoU 92.87%, mean Dice coefficient 98.41%, mean Dice for "bone" 96.31%, mean accuracy 99.85% and accuracy for the "bone" class 99.92%.
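The mean and class-wise IoU reported above follow the usual definition: per class c, IoU is the overlap of the predicted and ground-truth regions divided by their union, and mIoU averages over classes. A minimal sketch:

```python
import numpy as np

def class_iou(pred, gt, cls):
    """IoU for one class: |pred∩gt| / |pred∪gt| on that class's masks."""
    p, g = pred == cls, gt == cls
    inter = np.logical_and(p, g).sum()
    union = np.logical_or(p, g).sum()
    return inter / union if union else float("nan")

def mean_iou(pred, gt, classes):
    """Mean IoU over the given class labels (e.g. background, outline, bone)."""
    return float(np.nanmean([class_iou(pred, gt, c) for c in classes]))
```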
EN
For brain tumour treatment planning, the diagnoses and predictions made by medical doctors and radiologists depend on medical imaging. Obtaining clinically meaningful information from imaging modalities such as computerized tomography (CT), positron emission tomography (PET) and magnetic resonance (MR) scans is at the core of the software and advanced screening used by radiologists. In this paper, a universal framework for two parts of the dose control process, tumour detection and tumour area segmentation from medical images, is introduced. The framework implements methods to detect glioma tumours from CT and PET scans. Two pre-trained deep learning models, VGG19 and VGG19-BN, were investigated and used to fuse CT and PET examination results. Mask R-CNN (region-based convolutional neural network) was used for tumour detection; the model outputs bounding box coordinates for each tumour in the image. U-Net was used to perform semantic segmentation, i.e. to segment malignant cells and the tumour area. Transfer learning was used to increase model accuracy given the limited dataset, and data augmentation methods were applied to generate additional training samples. The implemented framework can be applied to other use cases that combine object detection and area segmentation from grayscale and RGB images, especially in shaping computer-aided diagnosis (CADx) and computer-aided detection (CADe) systems in the healthcare industry to assist doctors and medical care providers.
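The augmentation step mentioned above can be as simple as geometric transforms applied identically to each image and its mask. The sketch below is a minimal stand-in (the paper's actual augmentation methods are not specified here): it yields the eight flip/rotation variants of an image-mask pair.

```python
import numpy as np

def augment(image, mask):
    """Yield 8 geometric variants of an image-mask pair: the four 90-degree
    rotations, each with and without a horizontal flip. The same transform is
    applied to image and mask so the labels stay aligned."""
    for k in range(4):                        # 0, 90, 180, 270 degrees
        rot_img, rot_msk = np.rot90(image, k), np.rot90(mask, k)
        yield rot_img, rot_msk
        yield np.fliplr(rot_img), np.fliplr(rot_msk)
```

Each training pair thus becomes eight, which helps when the dataset is small, as it was here.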
EN
Background: Breast cancer is a deadly disease responsible for a large number of deaths worldwide every year. Identifying cancerous tumors is demanding, and considerable effort has been devoted to it. Clinicians use ultrasound as a diagnostic tool for breast cancer; however, poor image quality is a major limitation when segmenting breast ultrasound images. To address this problem, we present a semantic segmentation method for breast ultrasound (BUS) images. Method: The BUS images were resized and then enhanced with the contrast limited adaptive histogram equalization method. Subsequently, the variant enhanced block was used to encode the preprocessed image. Finally, the concatenated convolutions produced the segmentation mask. Results: The proposed method was evaluated on two datasets containing 264 and 830 BUS images, respectively. The Dice measure (DM), Jaccard measure, and Hausdorff distance were used for evaluation. The proposed method achieves a high DM of 89.73% for malignant and 89.62% for benign BUS images, validating its capacity to achieve a higher DM than previously reported methods. Conclusion: The proposed algorithm provides a deep learning segmentation procedure that can segment tumors in BUS images effectively and efficiently.
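The contrast-enhancement step above uses CLAHE; as a simplified stand-in, the sketch below implements plain global histogram equalization (no tiling and no contrast limiting, which are what distinguish true CLAHE). It maps each grey level through the normalized cumulative histogram.

```python
import numpy as np

def hist_equalize(img, levels=256):
    """Global histogram equalization of a uint8 image (simplified stand-in
    for CLAHE; assumes the image is not constant)."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                 # first non-zero CDF value
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1))
    return lut.astype(np.uint8)[img]          # look up each pixel
```

True CLAHE applies this per tile with a clipped histogram and bilinear interpolation between tiles, which preserves local contrast without amplifying noise.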
EN
Segmentation of lesions from fundus images is an essential prerequisite for accurate severity assessment of diabetic retinopathy. Due to variation in the morphology, number and size of lesions, manual grading becomes extremely challenging and time-consuming. This necessitates an automatic segmentation system that can precisely delineate region-of-interest boundaries and assist ophthalmologists in speedy diagnosis and diabetic retinopathy severity grading. The paper presents a modified U-Net architecture based on a residual network that employs periodic shuffling with sub-pixel convolution initialized to convolution nearest-neighbour resize. The proposed architecture has been trained and validated for microaneurysm and hard exudate segmentation on two publicly available datasets, IDRiD and e-ophtha. On the IDRiD dataset, the network obtains 99.88% accuracy, 99.85% sensitivity, 99.95% specificity and a Dice score of 0.9998 for both microaneurysm and exudate segmentation. Further, when trained on e-ophtha and validated on IDRiD, the network shows 99.98% accuracy, 99.88% sensitivity, 99.89% specificity and a Dice score of 0.9998 for microaneurysm segmentation. For exudate segmentation, the model obtained 99.98% accuracy, 99.88% sensitivity, 99.89% specificity and a Dice score of 0.9999 when trained on e-ophtha and validated on IDRiD. Compared with the existing literature, the proposed model provides state-of-the-art results for retinal lesion segmentation.
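The "periodic shuffling with sub-pixel convolution" mentioned above upsamples by rearranging channels into space: a tensor of shape (C·r², H, W) becomes (C, H·r, W·r). A minimal numpy sketch of that rearrangement (the convolution that produces the C·r² channels is omitted):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Periodic shuffling: rearrange a (C*r*r, H, W) array into (C, H*r, W*r),
    so that out[c, h*r+i, w*r+j] == x[c*r*r + i*r + j, h, w]."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)            # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)
```

Initializing the preceding convolution so this operation reproduces nearest-neighbour resize (as the paper describes) gives the upsampling path a sensible starting point before training.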