Search results (4 found) for the keyword: medical image classification
1
The use of deep learning techniques for early and accurate medical image diagnosis has grown significantly in recent years, with encouraging results across many medical specialties, pathologies, and image types. One of the most popular deep neural network architectures is the convolutional neural network (CNN), widely used for medical image classification and segmentation, among other tasks. One of the configuration parameters of a CNN is the stride, which regulates how sparsely the image is sampled during the convolution. This paper explores the idea of applying a patterned stride strategy: pixels closer to the center of the image are processed with a smaller stride, concentrating the sampled information there, while pixels farther from the center are processed with larger strides and are therefore sampled more sparsely. We apply this method to different medical image classification tasks and demonstrate experimentally that the proposed patterned stride mechanism outperforms a baseline solution with the same computational cost (processing and memory). We also discuss the relevance and potential future extensions of the proposed method.
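A minimal sketch of how such a patterned stride could be realised is shown below (in PyTorch; the module, layer widths, and crop size are illustrative assumptions, not the authors' implementation). The central crop of the image is convolved with a small stride while the full image is convolved with a larger stride, and the two pooled feature maps are concatenated for a downstream classifier.

```python
# Illustrative sketch of a patterned-stride convolution block (assumed names and sizes).
import torch
import torch.nn as nn


class PatternedStrideBlock(nn.Module):
    def __init__(self, in_ch=1, out_ch=16, center=64):
        super().__init__()
        self.center = center                                             # side of the densely sampled crop
        self.dense = nn.Conv2d(in_ch, out_ch, 3, stride=1, padding=1)    # small stride near the center
        self.sparse = nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1)   # larger stride over the whole image
        self.pool = nn.AdaptiveAvgPool2d(8)

    def forward(self, x):
        _, _, h, w = x.shape
        c = self.center
        top, left = (h - c) // 2, (w - c) // 2
        center_crop = x[:, :, top:top + c, left:left + c]
        f_center = self.pool(self.dense(center_crop))    # densely sampled central features
        f_context = self.pool(self.sparse(x))            # sparsely sampled global features
        return torch.cat([f_center, f_context], dim=1)   # 2 * out_ch feature channels


x = torch.randn(2, 1, 128, 128)          # e.g. a batch of grayscale medical images
print(PatternedStrideBlock()(x).shape)   # torch.Size([2, 32, 8, 8])
```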
2
The malignancy rating of pulmonary nodules is commonly confined to patient follow-up; the nodule's activity is estimated with a Positron Emission Tomography (PET) scan or a biopsy. However, these strategies are usually applied only after the initial detection of the malignant nodules in a Computed Tomography (CT) scan. In this study, a deep learning methodology is proposed to address the challenge of automatically characterising Solitary Pulmonary Nodules (SPN) detected in CT scans. The methodology is based on Convolutional Neural Networks (CNNs), which have proven to be excellent automatic feature extractors for medical images. The publicly available Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) CT dataset and a small CT dataset derived from a PET/CT system are considered the classification targets. New, realistic nodule representations are generated with Deep Convolutional Generative Adversarial Networks (DC-GAN) to circumvent the shortage of large-scale data needed to train robust CNNs. In addition, a hierarchical CNN called Feature Fusion VGG19 (FF-VGG19) was developed to enhance the feature extraction of the CNN proposed by the Visual Geometry Group (VGG). Moreover, the generated nodule images are separated into two classes using a semi-supervised approach, called self-training, to tackle the weak labelling caused by DC-GAN inefficiencies. The DC-GAN generates realistic SPNs: the experts could distinguish only 23% of the synthetic nodule images. As a result, the classification accuracy of FF-VGG19 on the LIDC-IDRI dataset increases by 7%, reaching 92.07%, while the classification accuracy on the PET/CT dataset increases by 5%, reaching 84.3%.
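The self-training step used to label the GAN-generated nodules can be sketched roughly as below; the function names, the scikit-learn-style classifier interface, and the 0.9 confidence threshold are illustrative assumptions, not details taken from the paper. High-confidence predictions on the synthetic pool are accepted as pseudo-labels and folded back into the training set.

```python
# Rough sketch of self-training (pseudo-labelling) over GAN-generated nodule patches.
import numpy as np


def self_train(model, x_labeled, y_labeled, x_synthetic, threshold=0.9, rounds=3):
    x_train, y_train = x_labeled.copy(), y_labeled.copy()
    pool = x_synthetic.copy()
    for _ in range(rounds):
        model.fit(x_train, y_train)                   # retrain on real + accepted synthetic samples
        if len(pool) == 0:
            break
        proba = model.predict_proba(pool)             # class probabilities for the synthetic nodules
        conf, pseudo = proba.max(axis=1), proba.argmax(axis=1)
        keep = conf >= threshold                      # accept only confident pseudo-labels
        x_train = np.concatenate([x_train, pool[keep]])
        y_train = np.concatenate([y_train, pseudo[keep]])
        pool = pool[~keep]                            # the rest stays unlabelled for later rounds
    return model
```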
3
Gliomas are the most common type of primary brain tumors in adults, and their early detection is of great importance. In this paper, a method based on convolutional neural networks (CNNs) and a genetic algorithm (GA) is proposed to noninvasively classify different grades of glioma using magnetic resonance imaging (MRI). In the proposed method, the architecture (structure) of the CNN is evolved using the GA, unlike existing approaches to selecting a deep neural network architecture, which are usually based on trial and error or on adopting predefined common structures. Furthermore, to decrease the variance of the prediction error, bagging is applied as an ensemble algorithm to the best model evolved by the GA. Briefly, in one case study, 90.9 percent accuracy was obtained for classifying three glioma grades. In another case study, glioma, meningioma, and pituitary tumor types were classified with 94.2 percent accuracy. The results show the effectiveness of the proposed method in classifying brain tumors from MRI images. Owing to the flexible nature of the method, it can readily be used in practice to assist physicians in diagnosing brain tumors at an early stage.
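A minimal, generic sketch of evolving a CNN architecture with a GA is given below; the genome encoding (a list of convolutional layer widths), the operators, and the placeholder fitness function are illustrative assumptions rather than the authors' exact design. In practice the fitness would build the candidate CNN, train it on the MRI data, and return validation accuracy.

```python
# Generic sketch of GA-based architecture search over convolutional layer widths.
import random


def random_genome():
    depth = random.randint(2, 5)
    return [random.choice([16, 32, 64, 128]) for _ in range(depth)]


def crossover(a, b):
    cut = random.randint(1, min(len(a), len(b)) - 1)   # one-point crossover
    return a[:cut] + b[cut:]


def mutate(genome, rate=0.2):
    return [random.choice([16, 32, 64, 128]) if random.random() < rate else g for g in genome]


def fitness(genome):
    # Placeholder: replace with "build CNN from genome, train, return validation accuracy".
    return -abs(sum(genome) - 200) / 200.0


def evolve(pop_size=20, generations=10):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]              # simple truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)


print(evolve())   # best layer-width genome found under the placeholder fitness
```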
4
Brain atrophy progress detection in MR images
Alzheimer's, Parkinson's, and other dementia-related diseases currently pose an important social problem. A high level of brain atrophy is one of the most important symptoms of these disorders, but it may also result from normal ageing. The purpose of the presented research is to design methods that support the detection of dementia symptoms in radiological images. The proposed framework consists of an image registration procedure, brain extraction and tissue segmentation, and detailed analysis of the image series (fractal and volumetric properties).
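A rough sketch of the kind of volumetric and fractal measurements such an analysis might compute from a binary brain-tissue mask is shown below; the voxel volume, box sizes, and the box-counting estimator are illustrative assumptions, not the authors' exact procedure.

```python
# Illustrative volumetric and box-counting (fractal) measures on a binary 3D tissue mask.
import numpy as np


def volume_ml(mask, voxel_mm3=1.0):
    """Tissue volume in millilitres from a binary segmentation mask."""
    return mask.sum() * voxel_mm3 / 1000.0


def box_counting_dimension(mask, sizes=(2, 4, 8, 16)):
    """Estimate the box-counting (fractal) dimension of a binary 3D mask."""
    counts = []
    for s in sizes:
        # Crop so each dimension is a multiple of the box size, then count occupied boxes.
        shape = [(d // s) * s for d in mask.shape]
        m = mask[: shape[0], : shape[1], : shape[2]]
        boxes = m.reshape(shape[0] // s, s, shape[1] // s, s, shape[2] // s, s)
        counts.append((boxes.sum(axis=(1, 3, 5)) > 0).sum())
    # Slope of log(count) versus log(1/size) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope


mask = np.random.rand(64, 64, 64) > 0.5            # stand-in for a segmented brain mask
print(volume_ml(mask), box_counting_dimension(mask))
```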