Search results
Searched for — keywords: "segmentacja semantyczna" (semantic segmentation)
Results found: 4
1
EN
The paper focuses on the automatic segmentation of bone structures from CT data series of the pelvic region. The authors trained and compared four deep neural network models (FCN, PSPNet, U-Net and SegNet) on the segmentation of three classes: background, patient outline and bones. The mean and class-wise Intersection over Union (IoU), Dice coefficient and pixel accuracy were evaluated for each network's output. In the initial phase, all of the networks were trained for 10 epochs. The most accurate segmentation results were obtained with the U-Net model, with a mean IoU of 93.2%. These results were further improved by a modified U-Net with ResNet50 as the encoder, trained for 30 epochs, which obtained the following results: mIoU – 96.92%, "bone" class IoU – 92.87%, mDice coefficient – 98.41%, Dice coefficient for "bone" – 96.31%, mAccuracy – 99.85% and accuracy for the "bone" class – 99.92%.
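As a concrete illustration of the metrics reported above, here is a minimal NumPy sketch of computing class-wise and mean IoU and Dice from integer label masks. The class indices (0 = background, 1 = patient outline, 2 = bone) and the random test masks are assumptions for demonstration, not the paper's data.

```python
import numpy as np

def class_iou_dice(pred, target, cls):
    """IoU and Dice for a single class from integer label masks."""
    p = (pred == cls)
    t = (target == cls)
    inter = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    iou = inter / union if union else float("nan")
    denom = p.sum() + t.sum()
    dice = 2 * inter / denom if denom else float("nan")
    return iou, dice

# Assumed class layout: 0 = background, 1 = patient outline, 2 = bone.
pred = np.random.randint(0, 3, (256, 256))
target = np.random.randint(0, 3, (256, 256))
per_class = [class_iou_dice(pred, target, c) for c in range(3)]
miou = np.nanmean([m[0] for m in per_class])
mdice = np.nanmean([m[1] for m in per_class])
print(f"mIoU={miou:.4f}, mDice={mdice:.4f}, bone IoU={per_class[2][0]:.4f}")
```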
2
EN
For brain tumour treatment planning, the diagnoses and predictions made by medical doctors and radiologists depend on medical imaging. Obtaining clinically meaningful information from imaging modalities such as computerized tomography (CT), positron emission tomography (PET) and magnetic resonance (MR) scans is at the core of the software and advanced screening used by radiologists. This paper introduces a universal and comprehensive framework for two parts of the dose control process: tumour detection and tumour area segmentation from medical images. The framework implements methods to detect glioma tumours from CT and PET scans. Two pre-trained deep learning models, VGG19 and VGG19-BN, were investigated and used to fuse the results of CT and PET examinations. Mask R-CNN (region-based convolutional neural network) was used for tumour detection; the model outputs bounding box coordinates for each tumour in the image. U-Net was used to perform semantic segmentation, i.e. to segment malignant cells and the tumour area. Transfer learning was used to increase the accuracy of the models given the limited size of the dataset, and data augmentation methods were applied to generate additional training samples. The implemented framework can be applied to other use cases that combine object detection and area segmentation from grayscale and RGB images, in particular to build computer-aided diagnosis (CADx) and computer-aided detection (CADe) systems that assist doctors and medical care providers in the healthcare industry.
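The abstract does not give implementation details, but the transfer-learning step it describes can be sketched in PyTorch/torchvision: load a pretrained VGG19-BN, freeze the feature extractor, and retrain a replaced head. The two-class head, learning rate and dummy batch below are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load VGG19 with batch normalization, pretrained on ImageNet.
model = models.vgg19_bn(weights=models.VGG19_BN_Weights.IMAGENET1K_V1)

# Freeze the convolutional feature extractor so only the new head trains.
for param in model.features.parameters():
    param.requires_grad = False

# Replace the final classifier layer for a binary tumour / no-tumour task
# (the two-class setup is an assumption for illustration).
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)

# One dummy forward/backward pass on a random batch to show the flow;
# a fused CT/PET slice replicated to 3 channels is assumed as input.
x = torch.randn(4, 3, 224, 224)
y = torch.randint(0, 2, (4,))
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
optimizer.step()
```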
3
EN
Background: Breast cancer is a deadly disease responsible for a large number of deaths worldwide every year. Identifying cancerous tumours is challenging, and considerable effort has therefore been devoted to it. Clinicians use ultrasound as a diagnostic tool for breast cancer, but poor image quality is a major limitation when segmenting breast ultrasound images. To address this problem, we present a semantic segmentation method for breast ultrasound (BUS) images. Method: The BUS images were resized and then enhanced with the contrast limited adaptive histogram equalization (CLAHE) method. Subsequently, the variant enhanced block was used to encode the preprocessed image. Finally, concatenated convolutions produced the segmentation mask. Results: The proposed method was evaluated on two datasets containing 264 and 830 BUS images, respectively. The Dice measure (DM), Jaccard measure and Hausdorff distance were used to evaluate the methods. The results indicate that the proposed method achieves a high DM of 89.73% for malignant and 89.62% for benign BUS images, and validate its capacity to achieve a higher DM than previously reported methods. Conclusion: The proposed algorithm provides a deep learning segmentation procedure that can segment tumours in BUS images effectively and efficiently.
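The preprocessing described in the Method section (resize followed by CLAHE) maps directly onto OpenCV; a minimal sketch follows. The 256x256 target size, clip limit and tile grid size are common defaults assumed for illustration, not values taken from the paper, and the file name is a placeholder.

```python
import cv2

# Read a BUS image as 8-bit grayscale; the file name is a placeholder.
img = cv2.imread("bus_image.png", cv2.IMREAD_GRAYSCALE)
assert img is not None, "image not found"

# Resize to a fixed network input size (256x256 is an assumption).
img = cv2.resize(img, (256, 256))

# Contrast Limited Adaptive Histogram Equalization, as in the
# preprocessing step described above; parameters are common defaults.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)

cv2.imwrite("bus_image_clahe.png", enhanced)
```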
4
EN
Segmentation of lesions from fundus images is an essential prerequisite for accurate severity assessment of diabetic retinopathy. Due to variation in the morphology, number and size of lesions, the manual grading process becomes extremely challenging and time-consuming. This necessitates an automatic segmentation system that can precisely delineate region-of-interest boundaries and assist ophthalmologists in speedy diagnosis and diabetic retinopathy severity grading. The paper presents a modified U-Net architecture based on a residual network that employs periodic shuffling with sub-pixel convolution initialized to convolution nearest-neighbour resize. The proposed architecture has been trained and validated for microaneurysm and hard exudate segmentation on two publicly available datasets, IDRiD and e-ophtha. On the IDRiD dataset, the network obtains 99.88% accuracy, 99.85% sensitivity, 99.95% specificity and a Dice score of 0.9998 for both microaneurysm and exudate segmentation. Further, when trained on e-ophtha and validated on IDRiD, the network shows 99.98% accuracy, 99.88% sensitivity, 99.89% specificity and a Dice score of 0.9998 for microaneurysm segmentation; for exudate segmentation under the same setup, it obtains 99.98% accuracy, 99.88% sensitivity, 99.89% specificity and a Dice score of 0.9999. Compared to the existing literature, the proposed model provides state-of-the-art results for retinal lesion segmentation.
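The upsampling block named here, periodic shuffling (sub-pixel convolution) with weights initialized to behave like nearest-neighbour resize (ICNR), can be sketched in PyTorch as below. The channel counts and kernel size are illustrative assumptions; this is not a reconstruction of the full proposed architecture.

```python
import torch
import torch.nn as nn

def icnr_(weight, scale=2, init=nn.init.kaiming_normal_):
    """ICNR init: make a conv feeding PixelShuffle start out as
    nearest-neighbour upsampling (Aitken et al., 2017)."""
    out_ch, in_ch, kh, kw = weight.shape
    sub = torch.empty(out_ch // scale ** 2, in_ch, kh, kw)
    init(sub)
    # Repeat each sub-kernel scale^2 times so every pixel within an
    # upsampled block starts from identical weights.
    weight.data.copy_(sub.repeat_interleave(scale ** 2, dim=0))

class SubPixelUp(nn.Module):
    """Upsampling block: conv -> periodic shuffling (PixelShuffle)."""
    def __init__(self, in_ch, out_ch, scale=2):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch * scale ** 2,
                              kernel_size=3, padding=1)
        icnr_(self.conv.weight, scale)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x):
        return self.shuffle(self.conv(x))

up = SubPixelUp(64, 32)
print(up(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 32, 64, 64])
```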