Search results
Searched in keywords: obraz histopatologiczny (histopathological image)
Results found: 7
EN
A crucial element in the diagnosis of breast cancer is a classification method that is efficient, lightweight, and precise. Convolutional neural networks (CNNs) have garnered attention as a viable approach for classifying histopathological images. However, deeper and wider models tend to rely on first-order statistics, demand substantial computational resources, and struggle with fixed kernel dimensions that cannot encompass data at diverse resolutions, which degrades performance during testing. This study introduces BCHI-CovNet, a novel lightweight artificial intelligence (AI) model for histopathological breast image classification. Firstly, a novel multiscale depth-wise separable convolution is proposed: it splits the input tensor into distinct fragments, each processed with a different kernel size, thereby integrating various kernel sizes within one depth-wise convolution to capture both low- and high-resolution patterns. Secondly, an additional pooling module is introduced to capture extensive second-order statistical information across channels and spatial dimensions. This module works in tandem with an innovative multi-head self-attention mechanism to capture long-range pixel dependencies that contribute significantly to learning, yielding distinctive and discriminative features that further enrich the representation and introduce pixel diversity during training. These novel designs substantially reduce computational complexity in terms of model parameters and FLOPs, which is crucial for resource-constrained medical devices. The proposed model achieves noteworthy performance on two openly accessible breast cancer histopathological image datasets: accuracy of 99.15% at 40× magnification, 99.08% at 100×, 99.22% at 200×, and 98.87% at 400× on the BreaKHis dataset, and 99.38% on the BACH dataset. These results highlight the exceptional effectiveness and practical promise of BCHI-CovNet for the classification of breast cancer histopathological images.
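The multiscale depth-wise idea described above — splitting the input tensor into channel fragments and convolving each fragment with its own kernel size — can be sketched as follows (a minimal NumPy illustration, not the published BCHI-CovNet implementation; the kernel sizes and random weights are illustrative assumptions):

```python
import numpy as np

def depthwise_conv2d(x, k):
    """Naive 'same'-padded depth-wise convolution: one kernel per channel."""
    c, h, w = x.shape
    ks = k.shape[-1]
    p = ks // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    out = np.zeros_like(x)
    for ch in range(c):
        for i in range(h):
            for j in range(w):
                out[ch, i, j] = np.sum(xp[ch, i:i + ks, j:j + ks] * k[ch])
    return out

def multiscale_depthwise(x, kernel_sizes=(3, 5, 7)):
    """Split channels into fragments; convolve each with a different kernel size."""
    fragments = np.array_split(x, len(kernel_sizes), axis=0)
    outs = []
    for frag, ks in zip(fragments, kernel_sizes):
        k = np.random.randn(frag.shape[0], ks, ks) * 0.1  # stand-in weights
        outs.append(depthwise_conv2d(frag, k))
    return np.concatenate(outs, axis=0)  # spatial size is preserved

x = np.random.randn(6, 16, 16)
y = multiscale_depthwise(x)
print(y.shape)  # (6, 16, 16)
```

Because each fragment keeps its spatial size under 'same' padding, the fragments can be re-concatenated along the channel axis regardless of kernel size.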
EN
Accurate nuclei segmentation is a critical step for physicians to obtain essential information about a patient's disease from digital pathology images, enabling effective diagnosis and evaluation of subsequent treatments. Since pathology images contain many nuclei, manual segmentation is time-consuming and error-prone, so a precise, automatic method for nuclei segmentation is urgently needed. This paper proposes a novel multi-task segmentation network that incorporates background and contour segmentation into the nuclei segmentation method and produces more accurate results. Convolution and attention modules are merged in the model to increase its global focus and indirectly enhance segmentation quality. We propose a reverse feature enhancement module for contour extraction that facilitates feature integration between auxiliary tasks. A multi-feature fusion module is embedded in the final decoding branch to exploit different levels of features from the auxiliary segmentation branches with varying concerns. We evaluate the proposed method on four challenging nuclei segmentation datasets and achieve excellent performance on all four: Dice coefficients of 0.8563±0.0323, 0.8183±0.0383, 0.9222±0.0216, and 0.9220±0.0602 on TNBC, MoNuSeg, KMC, and GlaS, respectively. Our method produces better boundary accuracy and less sticking of adjacent nuclei than other end-to-end segmentation methods, and the results show that it outperforms other state-of-the-art methods.
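The Dice coefficient reported above is straightforward to compute for binary masks; a minimal sketch (the test arrays are illustrative):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks; eps guards empty masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(pred, gt), 4))  # 2*2/(3+3) = 0.6667
```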
EN
Manual delineation of tumours in breast histopathology images is generally time-consuming and laborious. Computer-aided detection systems can assist pathologists by detecting abnormalities faster and more efficiently. Convolutional neural networks (CNNs) and transfer learning have shown good results in breast cancer classification. Most existing research employs state-of-the-art pre-trained architectures for classification, but the performance of these methods needs improvement in terms of effective feature learning and refinement. In this work, we propose an ensemble of two CNN architectures integrated with channel and spatial attention. Features from the histopathology images are extracted in parallel by two powerful custom deep architectures, CSAResnet and DAMCNN, and ensemble learning is then employed for further performance improvement. The proposed framework achieves a classification accuracy of 99.55% on the BreaKHis dataset.
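The final ensemble step — combining predictions from two branches — can be illustrated by probability averaging (a hedged sketch; the abstract does not specify the exact fusion rule, and the logits below are placeholders standing in for hypothetical CSAResnet/DAMCNN outputs):

```python
import numpy as np

def softmax(z):
    """Numerically stable row-wise softmax."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def ensemble_predict(logits_a, logits_b, w=0.5):
    """Average the class probabilities of the two branches, then take argmax."""
    p = w * softmax(logits_a) + (1 - w) * softmax(logits_b)
    return p.argmax(axis=1)

la = np.array([[2.0, 0.5], [0.1, 1.2]])  # branch A logits (illustrative)
lb = np.array([[1.5, 0.2], [0.3, 0.9]])  # branch B logits (illustrative)
print(ensemble_predict(la, lb))  # [0 1]
```

Weighted soft voting of this kind is a common default; a learned combiner (stacking) is another option the abstract's "ensemble learning" phrasing would admit.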
EN
Breast cancer is one of the major causes of death among women worldwide. Efficient diagnosis of breast cancer in its early phases can reduce the associated morbidity and mortality and provide a higher probability of full recovery. Computer-aided detection systems use computer technologies to detect abnormalities in clinical images, assisting medical professionals in faster and more accurate diagnosis. In this paper, we propose a modified residual neural network-based method for breast cancer detection using histopathology images. The proposed approach performs well over varying magnification factors of 40X, 100X, 200X, and 400X, obtaining an average classification accuracy of 99.75%, precision of 99.18%, and recall of 99.37% on the BreaKHis dataset at the 40X magnification factor. The proposed work outperforms existing methods and delivers state-of-the-art results on the benchmark breast cancer dataset.
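The residual connections underlying such a network can be sketched in a few lines (a toy fully connected version, assuming the standard y = x + F(x) formulation rather than the paper's exact architecture; weights are random placeholders):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = ReLU(x + W2·ReLU(W1·x)): the skip connection lets the block
    learn a residual on top of the identity, easing optimization of deep nets."""
    return relu(x + w2 @ relu(w1 @ x))

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
w1 = rng.standard_normal((8, 8)) * 0.1
w2 = rng.standard_normal((8, 8)) * 0.1
y = residual_block(x, w1, w2)
print(y.shape)  # (8,)
```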
EN
For convolutional neural networks (CNNs) applied to the intelligent diagnosis of gastric cancer, existing methods mostly focus on individual characteristics or network frameworks without a strategy for capturing the image's integral information. Notably, the conditional random field (CRF), an efficient and stable algorithm for analyzing images with complicated content, can characterize spatial relations in images. In this paper, a novel hierarchical conditional random field (HCRF) based gastric histopathology image segmentation (GHIS) method is proposed, which automatically localizes abnormal (cancerous) regions in gastric histopathology images obtained with an optical microscope to assist histopathologists in their work. The HCRF model is built with higher-order potentials, including pixel-level and patch-level potentials, and graph-based post-processing is applied to further improve segmentation performance. In particular, one CNN is trained to build the pixel-level potentials, and another three CNNs are fine-tuned to build the patch-level potentials for sufficient spatial segmentation information. In the experiment, a hematoxylin and eosin (H&E) stained gastric histopathology dataset containing 560 abnormal images is divided into training, validation, and test sets at a ratio of 1:1:2. Segmentation accuracy, recall, and specificity of 78.91%, 65.59%, and 81.33%, respectively, are achieved on the test set. The HCRF model demonstrates high segmentation performance and shows its effectiveness and future potential in the GHIS field.
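The fusion of pixel-level and patch-level cues can be illustrated with a simple weighted combination of probability maps (a deliberately simplified stand-in for the HCRF's higher-order potentials and graph-based post-processing; the weight and threshold are illustrative assumptions):

```python
import numpy as np

def fuse_potentials(pixel_prob, patch_prob, alpha=0.6, thresh=0.5):
    """Weighted fusion of a pixel-level probability map with an (already
    upsampled) patch-level map, thresholded to a binary abnormal-region mask."""
    fused = alpha * pixel_prob + (1 - alpha) * patch_prob
    return (fused >= thresh).astype(np.uint8)

pixel = np.array([[0.9, 0.2], [0.4, 0.8]])  # per-pixel CNN output (illustrative)
patch = np.array([[0.7, 0.1], [0.9, 0.6]])  # patch-level CNN output (illustrative)
mask = fuse_potentials(pixel, patch)
print(mask)
```

In the actual HCRF, these terms enter a joint energy that is minimized over the whole image rather than thresholded independently per pixel.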
EN
An exact mitotic count is one of the crucial parameters in breast cancer grading and prognosis. Detecting mitosis in standard H&E-stained histopathology images is challenging due to diffused intensities along object boundaries and shape variation across the stages of mitosis. This paper explores the feasibility of transfer learning for mitosis detection. A pre-trained convolutional neural network is transformed by coupling a random forest classifier with the initial fully connected layers to extract discriminant features from nuclei patches and precisely predict the class label of cell nuclei. The modified network accurately classifies the detected cell nuclei with limited training data. The designed framework achieves higher classification accuracy by carefully fine-tuning the pre-trained model and pre-processing the extracted features. Moreover, the proposed method is evaluated on the MITOS dataset provided for the MITOS-ATYPIA contest 2014 and on a clinical dataset from the Regional Cancer Centre, Thiruvananthapuram, India. The significance of the CNN-based method is demonstrated by comparison with recently reported works, including a multi-classifier system based on a deep belief network. Experiments show that the pre-trained CNN model outperforms conventionally used detection systems and provides at least a 15% improvement in F-score over other state-of-the-art techniques.
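The F-score used in the comparison above is the harmonic mean of precision and recall; a minimal sketch (the counts are illustrative):

```python
def f_score(tp, fp, fn):
    """F1 = 2·P·R / (P + R), with P = tp/(tp+fp) and R = tp/(tp+fn)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(round(f_score(tp=80, fp=10, fn=20), 4))  # 0.8421
```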
EN
Breast cancer has a high incidence rate compared with other cancers among women, and it can be fatal if not diagnosed early. Fortunately, modern imaging procedures such as MRI, mammography, and thermography, combined with computer systems, make it possible to diagnose many kinds of breast cancer in a short time. One type of breast cancer image is the histology image, obtained from excised tissue with digital cameras; such images contain invaluable information for diagnosing malignant and benign lesions. With the recent adoption of digital workflows in surgical pathology, diagnosis based on whole-slide microscopy image analysis has attracted the attention of many researchers in medical image processing. Computer-aided diagnosis (CAD) systems are developed to help pathologists make better decisions. However, CAD systems based on histology images have some weaknesses compared with those based on radiology images. Because these images are collected at different laboratory stages and from different samples, they follow different distributions, leading to a mismatch between the training (source) domain and the test (target) domain. Moreover, images of benign tumors closely resemble those of malignant ones, so analyzing them indiscriminately decreases classifier performance and recognition rate. In this research, a new representation-learning-based unsupervised domain adaptation method is proposed to overcome these problems. The method attempts to distinguish benign feature vectors from malignant ones by learning a domain-invariant space as far as possible. It achieved an average classification rate of 88.5% on the BreaKHis dataset, improving the classification rate by 5.1% over baseline methods and by 1.25% over state-of-the-art methods.
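One classic way to reduce such a source/target mismatch is to align second-order statistics across domains; the sketch below uses CORAL-style correlation alignment (a different, simpler technique than the paper's representation-learning method, shown only to illustrate the domain-invariance idea; all data is synthetic):

```python
import numpy as np

def _sqrtm(m, inv=False):
    """Symmetric matrix square root (or inverse square root) via eigendecomposition."""
    vals, vecs = np.linalg.eigh(m)
    vals = np.clip(vals, 1e-12, None)
    d = vals ** (-0.5 if inv else 0.5)
    return (vecs * d) @ vecs.T

def coral(source, target, eps=1e-3):
    """Whiten source features, then re-color them with the target covariance,
    so second-order statistics match across domains."""
    dim = source.shape[1]
    cs = np.cov(source, rowvar=False) + eps * np.eye(dim)
    ct = np.cov(target, rowvar=False) + eps * np.eye(dim)
    return source @ _sqrtm(cs, inv=True) @ _sqrtm(ct)

rng = np.random.default_rng(1)
src = rng.standard_normal((200, 4)) * np.array([3.0, 1.0, 0.5, 2.0])  # mismatched scales
tgt = rng.standard_normal((200, 4))
adapted = coral(src, tgt)
print(adapted.shape)  # (200, 4)
```

After the transform, a classifier trained on the adapted source features sees feature covariances close to those of the target domain.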