Search results
Searched in keywords: histopathology image
Results found: 5
EN
Accurate nuclei segmentation is a critical step that lets physicians extract essential information about a patient's disease from digital pathology images, enabling effective diagnosis and evaluation of subsequent treatment. Since pathology images contain many nuclei, manual segmentation is time-consuming and error-prone, so a precise, automatic nuclei segmentation method is urgently needed. This paper proposes a novel multi-task segmentation network that incorporates background and contour segmentation into the nuclei segmentation task and produces more accurate results. Convolution and attention modules are merged into the model to strengthen its global focus and indirectly improve segmentation quality. We propose a reverse feature enhancement module for contour extraction that facilitates feature integration between the auxiliary tasks, and a multi-feature fusion module embedded in the final decoding branch that exploits different levels of features from the auxiliary segmentation branches with their differing concerns. We evaluate the proposed method on four challenging nuclei segmentation datasets, where it achieves excellent performance: the Dice coefficient reaches 0.8563±0.0323, 0.8183±0.0383, 0.9222±0.0216, and 0.9220±0.0602 on TNBC, MoNuSeg, KMC, and GlaS, respectively. Our method yields better boundary accuracy and less adhesion between neighbouring nuclei than other end-to-end segmentation methods, and the results show it outperforms other state-of-the-art methods.
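The Dice coefficients reported above can be computed directly from binary masks; below is a minimal NumPy sketch (the function name and example masks are illustrative, not taken from the paper):

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, true_mask: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary segmentation masks (illustrative sketch)."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

# Example: two 4x4 masks that overlap on two pixels.
pred = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
true = np.array([[1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 0, 0, 0]])
print(dice_coefficient(pred, true))  # 2*2 / (3 + 3) ≈ 0.667
```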
EN
Manual delineation of tumours in breast histopathology images is generally time-consuming and laborious. Computer-aided detection systems can assist pathologists by detecting abnormalities faster and more efficiently. Convolutional Neural Networks (CNNs) and transfer learning have shown good results in breast cancer classification. Most existing works employ state-of-the-art pre-trained architectures for classification, but their performance still needs improvement in terms of effective feature learning and refinement. In this work, we propose an ensemble of two CNN architectures integrated with channel and spatial attention. Features from the histopathology images are extracted in parallel by two powerful custom deep architectures, namely CSAResnet and DAMCNN, and ensemble learning is then employed for further performance improvement. The proposed framework achieves a classification accuracy of 99.55% on the BreakHis dataset.
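Channel and spatial attention of the kind described here is often implemented as a CBAM-style block; the PyTorch sketch below is a generic version under that assumption, not the paper's CSAResnet or DAMCNN code:

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Generic channel-then-spatial attention block (CBAM-style); illustrative only."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: squeeze spatial dimensions, reweight each channel.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: reweight each location from pooled channel maps.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_mlp(x)
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * self.spatial_conv(pooled)

# Example: refine a feature map produced by a CNN backbone.
features = torch.randn(1, 64, 56, 56)
print(ChannelSpatialAttention(64)(features).shape)  # torch.Size([1, 64, 56, 56])
```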
EN
Breast cancer is one of the major causes of death among women worldwide. Efficient diagnosis of breast cancer in its early phases can reduce the associated morbidity and mortality and provide a higher probability of full recovery. Computer-aided detection systems use computer technologies to detect abnormalities in clinical images, which can assist medical professionals in reaching a faster and more accurate diagnosis. In this paper, we propose a modified residual neural network-based method for breast cancer detection using histopathology images. The proposed approach performs well across the magnification factors of 40X, 100X, 200X, and 400X. The network obtains an average classification accuracy of 99.75%, precision of 99.18%, and recall of 99.37% on the BreakHis dataset at the 40X magnification factor. The proposed work outperforms existing methods and delivers state-of-the-art results on this benchmark breast cancer dataset.
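The abstract does not specify how the residual network is modified; as a point of reference, a standard residual block with an identity skip connection looks like the sketch below (generic PyTorch, not the authors' architecture):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Standard 3x3 residual block (not the authors' specific modification)."""
    def __init__(self, in_channels: int, out_channels: int, stride: int = 1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
        )
        # 1x1 projection so the shortcut matches shape when channels or stride change.
        self.shortcut = nn.Identity()
        if stride != 1 or in_channels != out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_channels),
            )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.relu(self.body(x) + self.shortcut(x))

# A 224x224 RGB histopathology patch passed through one downsampling block.
patch = torch.randn(1, 3, 224, 224)
print(ResidualBlock(3, 32, stride=2)(patch).shape)  # torch.Size([1, 32, 112, 112])
```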
EN
For Convolutional Neural Networks (CNNs) applied to the intelligent diagnosis of gastric cancer, existing methods mostly focus on individual characteristics or network frameworks without a strategy for capturing the integral information of the image. Meanwhile, the conditional random field (CRF), an efficient and stable algorithm for analysing images with complicated content, can characterize spatial relations in images. In this paper, a novel hierarchical conditional random field (HCRF) based gastric histopathology image segmentation (GHIS) method is proposed, which can automatically localize abnormal (cancerous) regions in gastric histopathology images acquired with an optical microscope to assist histopathologists in their work. The HCRF model is built with higher-order potentials, including pixel-level and patch-level potentials, and graph-based post-processing is applied to further improve its segmentation performance. Specifically, one CNN is trained to build the pixel-level potentials and another three CNNs are fine-tuned to build the patch-level potentials, providing sufficient spatial segmentation information. In the experiment, a hematoxylin and eosin (H&E) stained gastric histopathological dataset containing 560 abnormal images is divided into training, validation, and test sets in a ratio of 1:1:2. Finally, segmentation accuracy, recall, and specificity of 78.91%, 65.59%, and 81.33% are achieved on the test set. Our HCRF model demonstrates high segmentation performance and shows its effectiveness and future potential in the GHIS field.
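The segmentation accuracy, recall, and specificity quoted above follow from pixel-wise confusion counts between predicted and ground-truth masks; a minimal NumPy sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Pixel-wise accuracy, recall (sensitivity), and specificity for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.sum(pred & truth)    # abnormal pixels correctly detected
    tn = np.sum(~pred & ~truth)  # normal pixels correctly rejected
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "recall": tp / (tp + fn) if (tp + fn) else 0.0,
        "specificity": tn / (tn + fp) if (tn + fp) else 0.0,
    }

# Example: predicted vs. ground-truth abnormal-region masks (toy 3x3 data).
pred = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 0]])
print(segmentation_metrics(pred, truth))
```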
EN
An accurate mitotic count is one of the crucial parameters in breast cancer grading and prognosis. Detection of mitosis in standard H&E-stained histopathology images is challenging due to diffused intensities along object boundaries and shape variation across the different stages of mitosis. This paper explores the feasibility of transfer learning for mitosis detection. A pre-trained Convolutional Neural Network is transformed by coupling a random forest classifier with its initial fully connected layers to extract discriminative features from nuclei patches and precisely predict the class label of each cell nucleus. The modified Convolutional Neural Network accurately classifies the detected cell nuclei with limited training data. The designed framework achieves higher classification accuracy by carefully fine-tuning the pre-trained model and pre-processing the extracted features. Moreover, the proposed method is evaluated on the MITOS dataset provided for the MITOS-ATYPIA 2014 contest and on a clinical dataset from the Regional Cancer Centre, Thiruvananthapuram, India. The significance of the CNN-based method is demonstrated by comparison with recently reported works, including a multi-classifier system based on a Deep Belief Network. Experiments show that the pre-trained Convolutional Neural Network model outperforms conventionally used detection systems and provides at least a 15% improvement in F-score over other state-of-the-art techniques.
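The transfer-learning scheme described here, reusing a pre-trained CNN as a fixed feature extractor and coupling it with a random forest classifier, can be sketched as follows (the torchvision ResNet-18 backbone and the random patch data are assumptions for illustration, not the paper's exact model or data):

```python
import numpy as np
import torch
import torchvision.models as models
from sklearn.ensemble import RandomForestClassifier

# Pre-trained backbone with the classification head removed -> fixed feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

def extract_features(patches: torch.Tensor) -> np.ndarray:
    """Map a batch of nuclei patches (N, 3, 224, 224) to CNN feature vectors."""
    with torch.no_grad():
        return backbone(patches).numpy()

# Illustrative stand-in data: 32 labelled nuclei patches (mitotic vs. non-mitotic).
train_patches = torch.randn(32, 3, 224, 224)
train_labels = np.random.randint(0, 2, size=32)

# Random forest trained on the CNN features rather than on raw pixels.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(extract_features(train_patches), train_labels)

test_patches = torch.randn(4, 3, 224, 224)
print(clf.predict(extract_features(test_patches)))  # predicted class label per patch
```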