Results found: 2

Search results
Searched in keywords: histopathological image
EN
A crucial element in the diagnosis of breast cancer is the utilization of a classification method that is efficient, lightweight, and precise. Convolutional neural networks (CNNs) have garnered attention as a viable approach for classifying histopathological images. However, deeper and wider models tend to rely on first-order statistics, demand substantial computational resources, and are constrained by fixed kernel dimensions that limit their ability to encompass data at diverse resolutions, thereby degrading performance during testing. This study introduces BCHI-CovNet, a novel lightweight artificial intelligence (AI) model for histopathological breast image classification. Firstly, a novel multiscale depth-wise separable convolution is proposed. It splits the input tensor into distinct tensor fragments, each processed with a distinct kernel size, so that multiple kernel sizes are integrated within a single depth-wise convolution to capture both low- and high-resolution patterns. Secondly, an additional pooling module is introduced to capture extensive second-order statistical information across the channel and spatial dimensions. This module works in tandem with an innovative multi-head self-attention mechanism to capture long-range pixel dependencies that contribute significantly to the learning process, yielding distinctive and discriminative features that further enrich the representation and introduce pixel diversity during training. These novel designs substantially reduce computational complexity in terms of model parameters and FLOPs, which is crucial for resource-constrained medical devices. The outcomes achieved by employing the suggested model on two openly accessible datasets of breast cancer histopathological images reveal noteworthy performance. Specifically, the proposed approach attains high levels of accuracy: 99.15% at 40× magnification, 99.08% at 100× magnification, 99.22% at 200× magnification, and 98.87% at 400× magnification on the BreaKHis dataset. Additionally, it achieves an accuracy of 99.38% on the BACH dataset. These results highlight the exceptional effectiveness and practical promise of BCHI-CovNet for the classification of breast cancer histopathological images.
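The multiscale depth-wise separable convolution described in the abstract can be pictured roughly as follows. This is a minimal PyTorch sketch assuming the input channels are split evenly into fragments processed with kernel sizes 3, 5, and 7; the class name, the channel-wise split, and the fusion by a point-wise convolution are illustrative assumptions, not the authors' published implementation.

```python
import torch
import torch.nn as nn

class MultiscaleDepthwiseSeparableConv(nn.Module):
    """Split the input channels into fragments, run a depth-wise convolution with a
    different kernel size on each fragment, then fuse with a 1x1 point-wise
    convolution (hypothetical sketch, not the paper's exact layer)."""

    def __init__(self, channels: int, kernel_sizes=(3, 5, 7)):
        super().__init__()
        assert channels % len(kernel_sizes) == 0, "channels must split evenly across kernel sizes"
        self.frag = channels // len(kernel_sizes)
        # one depth-wise branch per tensor fragment, each with its own receptive field
        self.branches = nn.ModuleList(
            nn.Conv2d(self.frag, self.frag, k, padding=k // 2, groups=self.frag)
            for k in kernel_sizes
        )
        # point-wise convolution mixes channels across the fragments
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        fragments = torch.split(x, self.frag, dim=1)          # distinct tensor fragments
        multiscale = [b(f) for b, f in zip(self.branches, fragments)]
        return self.pointwise(torch.cat(multiscale, dim=1))   # fused multiscale features

# Example: spatial and channel shape is preserved, (1, 48, 224, 224) -> (1, 48, 224, 224)
x = torch.randn(1, 48, 224, 224)
y = MultiscaleDepthwiseSeparableConv(48)(x)
```

Because each branch is depth-wise (one filter per channel) and the cross-channel mixing happens only in the 1×1 convolution, the parameter count and FLOPs stay far below those of a standard convolution with the same receptive fields, which is the lightweight property the abstract emphasizes.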
EN
Breast cancer has a high incidence rate compared with other cancers among women, and it can be fatal if it is not diagnosed early. Fortunately, modern imaging procedures such as MRI, mammography, and thermography, combined with computer systems, make it possible to diagnose all kinds of breast cancer in a short time. One type of breast cancer (BC) image is the histology image. Histology images are obtained from the entire excised tissue section using digital cameras and contain invaluable information for diagnosing malignant and benign lesions. Recently, with the push to adopt digital workflows in surgical pathology, diagnosis based on whole-slide microscopy image analysis has attracted the attention of many researchers in medical image processing. Computer-aided diagnosis (CAD) systems are developed to help pathologists make better decisions. However, histology-image-based CAD systems have weaknesses compared with radiology-image-based CAD systems. Because these images are collected at different laboratory stages and from different samples, they have different distributions, leading to a mismatch between the training (source) domain and the test (target) domain. In addition, images of benign tumors are highly similar to those of malignant tumors, so analyzing them indiscriminately reduces classifier performance and recognition rate. In this research, a new representation-learning-based unsupervised domain adaptation method is proposed to overcome these problems. The method attempts to distinguish feature vectors extracted from benign images from those extracted from malignant images by learning a domain-invariant space as far as possible. It achieved an average classification rate of 88.5% on the BreaKHis dataset, improving the classification rate by 5.1% over basic methods and by 1.25% over state-of-the-art methods.
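The core idea of learning a domain-invariant feature space can be illustrated with a common alignment objective such as the maximum mean discrepancy (MMD) between source-domain and target-domain feature batches. The abstract does not specify the paper's exact objective, so the sketch below is only a generic example of the representation-alignment idea, with hypothetical function and variable names.

```python
import torch

def rbf_mmd2(source: torch.Tensor, target: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Squared maximum mean discrepancy with an RBF kernel between batches of
    source-domain and target-domain feature vectors; minimizing it pushes the
    feature extractor toward a domain-invariant representation (illustrative only)."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)                 # pairwise squared Euclidean distances
        return torch.exp(-d2 / (2.0 * sigma ** 2))    # RBF kernel matrix
    return (kernel(source, source).mean()
            + kernel(target, target).mean()
            - 2.0 * kernel(source, target).mean())

# Example: features from a labeled source lab and an unlabeled target lab
src_feat = torch.randn(32, 128)   # hypothetical 128-d features from the training domain
tgt_feat = torch.randn(32, 128)   # hypothetical 128-d features from the test domain
alignment_loss = rbf_mmd2(src_feat, tgt_feat)   # added to the supervised classification loss
```

Adding such an alignment term to the supervised loss on the source domain encourages features from both laboratories to follow similar distributions, which is the general mechanism by which unsupervised domain adaptation mitigates the source/target mismatch described above.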