Search results
Results found: 2
Searched in keywords: segmentacja prostaty (prostate segmentation)
EN
Prostate cancer is one of the most commonly diagnosed non-cutaneous malignant tumors and the sixth leading cause of cancer-related death in men globally. Automatic segmentation of prostate regions has a wide range of applications in prostate cancer diagnosis and treatment. Extracting powerful spatial features for precise prostate segmentation is challenging because of the wide variation in prostate size, shape, and histopathologic heterogeneity among patients. Most existing CNN-based architectures produce unsatisfactory results and inaccurate boundaries in prostate segmentation, caused by inadequate discriminative feature maps and a limited amount of spatial information. To address these issues, we propose a novel deep learning technique, a Multi-Stage FCN architecture for 2D prostate segmentation, that captures more precise spatial information and more accurate prostate boundaries. In addition, a new prostate ultrasound image dataset, CCH-TRUSPS, was collected from Chongqing University Cancer Hospital, including prostate ultrasound images of various prostate cancer architectures. We evaluate our method on the CCH-TRUSPS dataset and the publicly available Multi-site T2-weighted MRI dataset using five metrics commonly used in medical image analysis. Compared to other CNN-based methods on the CCH-TRUSPS test set, our Multi-Stage FCN achieves the best binary accuracy (99.15%), DSC (94.90%), IoU (89.80%), precision (94.67%), and recall (96.49%). The statistical and visual results demonstrate that our approach outperforms previous CNN-based techniques across all metrics and can support the clinical diagnosis of prostate cancer.
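The five metrics reported above (binary accuracy, DSC, IoU, precision, recall) can all be derived from the confusion counts of a predicted mask against a ground-truth mask. The function below is a minimal illustrative sketch; how the paper averages these values over the test set (per image vs. over all pixels) is not stated in the abstract and is left open here.

```python
import numpy as np

def segmentation_metrics(pred, gt, eps=1e-7):
    """Compute common binary-segmentation metrics from two masks.

    pred, gt: numpy arrays of the same shape, values 0/1 (or bool).
    Illustrative definitions only; the paper's exact averaging
    scheme is an assumption not specified in the abstract.
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()        # true positives
    fp = np.logical_and(pred, ~gt).sum()       # false positives
    fn = np.logical_and(~pred, gt).sum()       # false negatives
    tn = np.logical_and(~pred, ~gt).sum()      # true negatives
    return {
        "binary_accuracy": (tp + tn) / (tp + tn + fp + fn + eps),
        "dsc": 2 * tp / (2 * tp + fp + fn + eps),  # Dice similarity
        "iou": tp / (tp + fp + fn + eps),          # Jaccard index
        "precision": tp / (tp + fp + eps),
        "recall": tp / (tp + fn + eps),
    }
```

Note that DSC and IoU are monotonically related (DSC = 2·IoU / (1 + IoU)), which is consistent with the paired scores reported above.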
EN
Prostate lesion detection in axial T2-weighted (T2W) MR images is a very challenging task due to the heterogeneous and inconsistent pixel representation surrounding the prostate boundary. In this paper, a radiomics-based deeply supervised U-Net is proposed for both prostate gland and prostate lesion segmentation. The proposed pipeline is trained and validated on 1174 and 2071 T2W MR images of 40 patients and tested on 250 and 415 T2W MR images of 10 patients for prostate capsule segmentation and prostate lesion segmentation, respectively. Effective segmentation of prostate lesions in various stages of prostate cancer (namely T1, T2, T3, and T4) is achieved using the proposed framework. The mean Dice Similarity Coefficient (DSC) is 0.8958 for prostate capsule segmentation and 0.9176 for prostate lesion segmentation. The proposed framework is also tested on the public Promise12 dataset to analyze its performance in segmenting the prostate gland. The segmentation results using the proposed architecture are promising compared to state-of-the-art techniques, and the framework also improves the accuracy of prostate cancer diagnosis.
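Deep supervision, as used in the second abstract's U-Net, attaches auxiliary losses to intermediate decoder outputs so that earlier stages also receive a direct training signal. The sketch below illustrates the idea with a soft-Dice loss and equal stage weights; both choices are assumptions for illustration, since the abstract does not specify the paper's loss function or weighting scheme.

```python
import numpy as np

def dice_loss(prob, gt, eps=1e-7):
    # Soft Dice loss between a probability map and a binary ground-truth mask.
    inter = (prob * gt).sum()
    return 1.0 - (2 * inter + eps) / (prob.sum() + gt.sum() + eps)

def deeply_supervised_loss(side_outputs, gt, weights=None):
    """Combine losses from intermediate decoder outputs (deep supervision).

    side_outputs: list of probability maps, coarsest to finest, each
    already upsampled to the ground-truth resolution. Equal weights are
    an assumed default; the paper's weighting is not given in the abstract.
    """
    if weights is None:
        weights = [1.0 / len(side_outputs)] * len(side_outputs)
    return sum(w * dice_loss(p, gt) for w, p in zip(weights, side_outputs))
```

In practice the auxiliary terms are often down-weighted relative to the final output's loss, since the coarse stages cannot be expected to match the full-resolution target exactly.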