A new version of the platform, containing only full-text resources, is now available.
Go to https://bibliotekanauki.pl

Results found: 3

Search results
Searched:
in keywords: histopathological image
EN
A crucial element in the diagnosis of breast cancer is a classification method that is efficient, lightweight, and precise. Convolutional neural networks (CNNs) have garnered attention as a viable approach to classifying histopathological images. However, deeper and wider models tend to rely on first-order statistics, demand substantial computational resources, and struggle with fixed kernel dimensions that limit their ability to encompass data at diverse resolutions, thereby degrading performance during testing. This study introduces BCHI-CovNet, a novel lightweight artificial intelligence (AI) model for histopathological breast image classification. Firstly, a novel multiscale depth-wise separable convolution is proposed. It splits input tensors into distinct fragments, each convolved with a unique kernel size, thereby integrating various kernel sizes within one depth-wise convolution to capture both low- and high-resolution patterns. Secondly, an additional pooling module is introduced to capture extensive second-order statistical information across the channel and spatial dimensions. This module works in tandem with an innovative multi-head self-attention mechanism to capture the long-range pixel dependencies that contribute significantly to the learning process, yielding distinctive and discriminative features that further enrich the representation and introduce pixel diversity during training. These novel designs substantially reduce computational complexity in terms of model parameters and FLOPs, which is crucial for resource-constrained medical devices. The outcomes achieved by the proposed model on two openly accessible breast cancer histopathological image datasets reveal noteworthy performance. Specifically, the proposed approach attains high levels of accuracy: 99.15% at 40× magnification, 99.08% at 100× magnification, 99.22% at 200× magnification, and 98.87% at 400× magnification on the BreaKHis dataset.
Additionally, it achieves an accuracy of 99.38% on the BACH dataset. These results highlight the exceptional effectiveness and practical promise of BCHI-CovNet for classifying breast cancer histopathological images.
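The channel-splitting idea behind the multiscale depth-wise convolution can be made concrete with a minimal sketch. This is a 1-D pure-Python toy with fixed box filters standing in for learned weights, not the paper's 2-D learned implementation; function names and the choice of kernels are illustrative assumptions.

```python
# Toy sketch of a multiscale depth-wise convolution: channels are split into
# fragments, and each fragment is convolved with its own kernel size, so one
# layer mixes low- and high-resolution patterns. Box filters stand in for
# learned weights; the real BCHI-CovNet layer is 2-D and trainable.

def depthwise_conv1d(channel, kernel):
    """'Same'-padded 1-D convolution of a single channel with one kernel."""
    pad = len(kernel) // 2
    padded = [0.0] * pad + channel + [0.0] * pad
    return [sum(k * padded[i + j] for j, k in enumerate(kernel))
            for i in range(len(channel))]

def multiscale_depthwise(x, kernel_sizes):
    """Split the channels of x into len(kernel_sizes) fragments and apply a
    different kernel size to each fragment (depth-wise: one filter per channel)."""
    n = len(x)
    frag = n // len(kernel_sizes)
    out = []
    for g, ks in enumerate(kernel_sizes):
        kernel = [1.0 / ks] * ks  # box filter in place of learned weights
        start = g * frag
        stop = n if g == len(kernel_sizes) - 1 else start + frag
        for ch in x[start:stop]:
            out.append(depthwise_conv1d(ch, kernel))
    return out

# 4 channels of 6 samples; the two fragments get kernel sizes 1 and 3.
x = [[float(i + c) for i in range(6)] for c in range(4)]
y = multiscale_depthwise(x, kernel_sizes=[1, 3])
```

A kernel size of 1 leaves its fragment unchanged (high-resolution detail), while the size-3 fragment is smoothed (lower-resolution context), mirroring how differing kernel sizes within one layer capture patterns at several scales.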
EN
Squamous cell carcinoma is the most common type of cancer and occurs in many organs of the human body. To detect carcinoma, pathologists examine tissue samples at multiple magnifications, which is time-consuming and prone to inter- and intra-observer variability. The key challenge in automating squamous cell carcinoma diagnosis is to extract features at low (100x) magnification and explain the decision-making process to healthcare professionals. The existing literature uses either machine learning or deep learning models to detect squamous cell carcinoma of specific organs. In this work, we report on the implementation of an explainable diagnostic aid system for squamous cell carcinoma of any organ and present a comparative analysis with state-of-the-art models. A classifier with an ensemble feature selection technique is developed to provide an automatic diagnostic aid for distinguishing between squamous cell carcinoma-positive and -negative cases based on histopathological images. Moreover, explainable AI techniques such as ELI5, LIME, and SHAP are applied to the machine learning model, providing feature-level interpretability of the classifier's predictions. The results show that the machine learning model achieved accuracies of 93.43% and 96.66% on public and multi-centric private datasets, respectively. The proposed CatBoost classifier achieved remarkable performance in diagnosing multi-organ squamous cell carcinoma from low-magnification histopathological images, even when various illumination variations were introduced.
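The general shape of an ensemble feature selection step can be sketched as follows. This is a minimal illustration under assumed simple rankers (per-feature variance and absolute correlation with the label), not the authors' exact pipeline; in their system the selected features would then feed a CatBoost classifier.

```python
# Illustrative ensemble feature selection: several rankers score each
# feature, the per-ranker ranks are averaged, and the top-k features are
# kept before training the downstream classifier. The two rankers used
# here (variance, |Pearson correlation| with the label) are assumptions.

def variance(col):
    m = sum(col) / len(col)
    return sum((v - m) ** 2 for v in col) / len(col)

def abs_corr(col, y):
    n = len(col)
    mx, my = sum(col) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(col, y))
    vx = sum((a - mx) ** 2 for a in col) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return abs(cov / (vx * vy)) if vx and vy else 0.0

def ensemble_select(X, y, k):
    """X: samples as rows. Rank features under each criterion, average the
    ranks across criteria, and return the indices of the top-k features."""
    cols = [list(c) for c in zip(*X)]
    scores = [[variance(c) for c in cols],
              [abs_corr(c, y) for c in cols]]
    n_feat = len(cols)
    avg_rank = [0.0] * n_feat
    for s in scores:
        order = sorted(range(n_feat), key=lambda i: -s[i])
        for rank, i in enumerate(order):
            avg_rank[i] += rank / len(scores)
    return sorted(range(n_feat), key=lambda i: avg_rank[i])[:k]

# Feature 0 is constant and uninformative; features 1 and 2 vary with y.
X = [[0.0, 5.0, 1.0], [0.0, 1.0, 2.0], [0.0, 4.0, 3.0], [0.0, 2.0, 4.0]]
y = [0, 1, 0, 1]
selected = ensemble_select(X, y, k=2)
```

Averaging ranks rather than raw scores keeps the combination scale-free, which is why rank aggregation is a common way to merge heterogeneous selection criteria.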
EN
Breast cancer has a high incidence rate compared to other cancers among women. The disease can be fatal if it is not diagnosed early. Fortunately, by means of modern imaging procedures such as MRI, mammography, and thermography, together with computer systems, it is possible to diagnose all kinds of breast cancer in a short time. One type of breast cancer (BC) image is the histology image. Histology images are obtained from the entire cut-off tissue with digital cameras and contain invaluable information for diagnosing malignant and benign lesions. Recently, with the demand for digital workflows in surgical pathology, diagnosis based on whole-slide microscopy image analysis has attracted the attention of many researchers in medical image processing. Computer-aided diagnosis (CAD) systems are developed to help pathologists make better decisions. Histology-image-based CAD systems have some weaknesses compared with radiology-image-based CAD systems. Because these images are collected at different laboratory stages and from different samples, they have different distributions, leading to a mismatch between the training (source) domain and the test (target) domain. On the other hand, there is great similarity between images of benign tumors and those of malignant ones, so analyzing these images indiscriminately decreases classifier performance and recognition rate. In this research, a new representation-learning-based unsupervised domain adaptation method is proposed to overcome these problems. The method attempts to distinguish benign feature vectors from malignant ones by learning a domain-invariant space as far as possible. It achieved an average classification rate of 88.5% on the BreaKHis dataset, improving the classification rate by 5.1% over basic methods and 1.25% over state-of-the-art methods.
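The source/target distribution mismatch described above can be illustrated with a deliberately simple alignment step: matching per-feature moments of the target domain to the source domain. The paper learns a domain-invariant representation; this moment-matching toy is only an assumed stand-in to make the domain-shift problem concrete.

```python
# Toy illustration of unsupervised domain adaptation via per-feature
# moment matching: target-domain features are standardized, then rescaled
# to the source mean/std, so a classifier trained on the source domain
# sees matched first- and second-order statistics. This is NOT the
# paper's representation-learning method, just a minimal stand-in.

def moments(cols):
    """Per-feature (mean, std) over a list of feature columns."""
    out = []
    for c in cols:
        m = sum(c) / len(c)
        sd = (sum((v - m) ** 2 for v in c) / len(c)) ** 0.5
        out.append((m, sd))
    return out

def align_target_to_source(source, target):
    """For each feature: standardize the target column, then re-express it
    with the source mean/std. Rows are samples, columns are features."""
    s_moms = moments([list(c) for c in zip(*source)])
    t_cols = [list(c) for c in zip(*target)]
    t_moms = moments(t_cols)
    aligned_cols = []
    for (sm, ss), (tm, ts), col in zip(s_moms, t_moms, t_cols):
        scale = ss / ts if ts else 1.0
        aligned_cols.append([(v - tm) * scale + sm for v in col])
    return [list(row) for row in zip(*aligned_cols)]

# Target features are shifted and rescaled copies of the source features.
source = [[0.0, 10.0], [2.0, 12.0], [4.0, 14.0]]
target = [[100.0, 5.0], [102.0, 6.0], [104.0, 7.0]]
aligned = align_target_to_source(source, target)
```

After alignment, the target columns share the source mean and standard deviation, removing the first- and second-order part of the domain shift; the learned method in the paper goes further by also shaping the space to keep benign and malignant feature vectors separable.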