Search results for keyword: medical image analysis (10 results found)
EN
With the onset of the COVID-19 pandemic, automated diagnosis has become one of the most active research topics for faster mass screening. Deep learning-based approaches have been established as the most promising methods in this regard. However, the limited availability of labeled data is the main bottleneck for data-hungry deep learning methods. In this paper, a two-stage deep CNN-based scheme is proposed to detect COVID-19 from chest X-ray images, achieving optimum performance with limited training images. In the first stage, an encoder-decoder autoencoder network is trained on chest X-ray images in an unsupervised manner, learning to reconstruct the X-ray images. For the second stage, an encoder-merging network is proposed that consists of different layers of the encoder model followed by a merging network. Here, the encoder model is initialized with the weights learned in the first stage, and the outputs from different layers of the encoder are used effectively by being connected to the proposed merging network, which introduces an intelligent feature-merging scheme. Finally, the encoder-merging network is trained for feature extraction from the X-ray images in a supervised manner, and the resulting features are used in the classification layers of the proposed architecture. Considering the final classification task, an EfficientNet-B4 network is utilized in both stages. End-to-end training is performed for datasets containing the classes COVID-19, Normal, Bacterial Pneumonia, and Viral Pneumonia. The proposed method offers very satisfactory performance compared to state-of-the-art methods, achieving an accuracy of 90.13% on 4-class, 96.45% on 3-class, and 99.39% on 2-class classification.
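The distinctive step above is the merging of outputs from several encoder depths into one feature vector before classification. A minimal sketch of that idea, with toy linear "layers" standing in for the paper's EfficientNet-B4 encoder (the weights and layer shapes here are illustrative assumptions, not the paper's):

```python
# Sketch: features taken from several encoder depths are concatenated
# ("merged") into a single vector that feeds the classification layers.
def layer1(x):
    """First encoder stage (illustrative fixed weights, not EfficientNet's)."""
    return [0.5 * v for v in x]

def layer2(h):
    """A deeper encoder stage; here a simple pooling of the previous output."""
    return [sum(h) / len(h)]

def encode_and_merge(x):
    h1 = layer1(x)
    h2 = layer2(h1)
    return h1 + h2  # merging: shallow and deep features, concatenated

print(encode_and_merge([2.0, 4.0]))  # -> [1.0, 2.0, 1.5]
```

In the paper, stage one pretrains these encoder weights as an autoencoder on unlabeled X-rays; stage two would then fine-tune the merged features with a supervised classification head.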
EN
Natural phenomena and mechanisms have always intrigued humans, inspiring the design of effective solutions for real-world problems. Indeed, fascinating processes occur in nature, giving rise to ever-increasing scientific interest. In everyday life, the amount of heterogeneous biomedical data is growing rapidly thanks to advances in image acquisition modalities and high-throughput technologies. The automated analysis of these large-scale datasets creates new compelling challenges for data-driven and model-based computational methods. The application of intelligent algorithms that mimic natural phenomena is emerging as an effective paradigm for tackling complex problems, considering the unique challenges and opportunities pertaining to biomedical images. Therefore, the principal contribution of computer science research in the life sciences concerns the proper combination of diverse and heterogeneous datasets (i.e., medical imaging modalities, including radiomics approaches; Electronic Health Record engines; multi-omics studies; and real-time monitoring) to provide comprehensive clinical knowledge. In this paper, the state of the art in nature-inspired medical image analysis methods is surveyed, aiming to establish a common platform for beneficial exchanges between computer scientists and clinicians. In particular, this review focuses on the main nature-inspired computational techniques applied to medical image analysis tasks, namely: physical processes, bio-inspired mathematical models, Evolutionary Computation, Swarm Intelligence, and neural computation. These frameworks, tightly coupled with Clinical Decision Support Systems, can be suitably applied to every phase of the clinical workflow. We show that the proper combination of quantitative imaging and healthcare informatics enables an in-depth understanding of molecular processes that can guide towards personalised patient care.
EN
To meet the increasing demand for quality healthcare, cutting-edge automation technology is being applied in demanding areas such as medical imaging. This paper proposes a novel approach to classification problems on datasets with sparse, highly localized features, based on the use of a saliency map to amplify features. Unlike previous efforts, this approach does not use any prior information about feature localization. We present an experimental study of the Diabetic Retinopathy classification problem, in which our method achieves over 20% higher accuracy on a two-class Diabetic Retinopathy classification problem than a naive approach based solely on residual neural networks. The dataset consists of 35,120 images of varying quality, inconsistent resolution, and aspect ratio.
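The abstract's core mechanism, amplifying sparse localized features via a saliency map, can be sketched as a per-pixel gain applied where saliency is high. The toy saliency map and gain below are illustrative assumptions; the paper's saliency maps are learned, not hand-crafted:

```python
# Sketch: scale pixel intensities up where the saliency map is high, so
# sparse, highly localized features (e.g. small lesions) dominate the
# input handed to a downstream classifier.
def amplify(image, saliency, gain=2.0):
    s_max = max(max(row) for row in saliency) or 1.0  # normalize saliency
    return [
        [pix * (1.0 + gain * sal / s_max) for pix, sal in zip(img_row, sal_row)]
        for img_row, sal_row in zip(image, saliency)
    ]

image    = [[10.0, 10.0], [10.0, 10.0]]
saliency = [[0.0, 0.0], [0.0, 5.0]]  # one salient pixel (e.g. a lesion)
print(amplify(image, saliency))      # -> [[10.0, 10.0], [10.0, 30.0]]
```

Note that no prior assumption about *where* the feature sits is needed; the saliency map itself carries the localization.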
4. Fast 3D Segmentation of Hepatic Images Combining Region and Boundary Criteria
EN
A new approach to liver segmentation from 3D images is presented and compared to existing methods in terms of segmentation quality and speed. The proposed technique is based on a 3D deformable model (active surface) combining boundary and region information. The segmentation quality is comparable to existing methods, but the proposed technique is significantly faster. The experimental evaluation was performed on clinical datasets (both MRI and CT), representing typical liver shapes as well as shapes that are more challenging to segment.
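A deformable model of this kind minimizes an energy mixing a boundary term (the contour should sit on strong gradients) with a region term (intensities inside should match an expected appearance). A minimal sketch of such a combined energy, with illustrative weights and toy 1D data rather than the paper's 3D active surface:

```python
# Sketch: combined energy for a candidate contour. High gradient along the
# boundary lowers the energy; intensity deviation inside raises it. The
# deformable model would iteratively move the surface to minimize this.
def combined_energy(grad_on_contour, inside_vals, target_mean, alpha=1.0, beta=1.0):
    boundary = -sum(grad_on_contour) / len(grad_on_contour)  # high gradient -> low energy
    region = sum((v - target_mean) ** 2 for v in inside_vals) / len(inside_vals)
    return alpha * boundary + beta * region

# A contour on a sharp edge enclosing homogeneous liver-like tissue ...
good = combined_energy([9.0, 10.0, 11.0], [50.0, 51.0, 49.0], target_mean=50.0)
# ... scores lower (better) than one on weak gradients over mixed tissue.
bad  = combined_energy([1.0, 2.0, 1.0], [50.0, 70.0, 30.0], target_mean=50.0)
print(good < bad)  # -> True
```

Combining the two terms is what lets the model survive both weak edges (region term takes over) and inhomogeneous regions (boundary term takes over).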
EN
Abdominal Aortic Aneurysm (AAA) is a local dilation of the aorta that occurs between the renal and iliac arteries. A recently developed treatment involves the insertion of an endovascular prosthesis (EVAR), which has the advantage of being minimally invasive but also requires monitoring to analyze postoperative patient outcomes. The most widespread monitoring method is computerized axial tomography (CAT) imaging, which allows 3D reconstruction and segmentation of the lumen of the patient's aorta. Previously published methods measure the deformation of the aorta between two studies of the same patient using image registration techniques. This paper applies neural network and statistical classifiers to build a predictor of patient survival. The features used for classification are the volume registration quality measures after each of the image registration steps. This system provides the medical team with an additional decision support tool.
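The classification setup described, a vector of per-step registration-quality measures mapped to a survival label, can be sketched with a plain logistic-regression classifier standing in for the paper's neural-network and statistical classifiers. The feature values and labels below are synthetic, purely for illustration:

```python
# Sketch: logistic regression over "registration quality after each step"
# feature vectors. Label 1 stands for predicted survival. Synthetic data.
import math

def train_logreg(X, y, lr=0.5, epochs=500):
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi  # gradient of the log-loss with respect to z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if z > 0 else 0

X = [[0.9, 0.8], [0.85, 0.9], [0.2, 0.3], [0.3, 0.1]]  # quality per step
y = [1, 1, 0, 0]
w, b = train_logreg(X, y)
print([predict(w, b, xi) for xi in X])  # -> [1, 1, 0, 0]
```

The point of the design is that no new measurement is needed: the quality scores the registration pipeline already produces double as the classifier's input features.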
6. Discriminatory Power of Co-Occurrence Features in Perfusion CT Prostate Images
EN
This paper presents an algorithm to improve the effectiveness of early prostate cancer (PCa) detection. The need for such a computational method lies in the fact that, although perfusion computed tomography (p-CT) is considered a good technique for the detection of early PCa, p-CT prostate images are very difficult for radiologists to interpret manually. We propose a methodology for computational analysis of p-CT prostate images based on textural features derived from co-occurrence matrices and their 21 coefficients. Selecting only a few of the considered features ensures the necessary balance between matching the set of already known images and generalizing to new, not yet clear cases. The proposed algorithm for automatic differentiation of the healthy area of the image from the cancerous region was tested on a set of 59 prostate images. Although the results were not entirely satisfactory (86% correct recognitions), this method may be considered a base for the development of a better algorithm.
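The co-occurrence features mentioned come from a gray-level co-occurrence matrix (GLCM): a normalized count of how often gray-level pairs appear at a fixed pixel offset, from which texture coefficients such as contrast and energy are derived. A minimal sketch, with a toy image and a single horizontal offset as illustrative choices (the paper uses 21 such coefficients):

```python
# Sketch: gray-level co-occurrence matrix and two classic texture features.
from collections import Counter

def glcm(image, dx=1, dy=0):
    """Normalized co-occurrence frequencies of gray-level pairs at (dx, dy)."""
    h, w = len(image), len(image[0])
    counts = Counter()
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                counts[(image[y][x], image[ny][nx])] += 1
    total = sum(counts.values())
    return {pair: c / total for pair, c in counts.items()}

def contrast(p):   # large for pairs of very different gray levels
    return sum(prob * (i - j) ** 2 for (i, j), prob in p.items())

def energy(p):     # large for uniform, orderly textures
    return sum(prob ** 2 for prob in p.values())

image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [2, 2, 3, 3],
    [2, 2, 3, 3],
]
p = glcm(image)
print(contrast(p), energy(p))
```

Feature selection then keeps only the few coefficients that actually separate healthy from cancerous regions, which is the balance the abstract refers to.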
7. Texture analysis in perfusion images of prostate cancer - a case study
EN
The analysis of prostate images is one of the most complex tasks in medical image interpretation. It is sometimes very difficult to detect early prostate cancer using currently available diagnostic methods, but examination based on perfusion computed tomography (p-CT) may avoid such problems even in particularly difficult cases. However, the lack of computational methods for interpreting perfusion prostate images makes the technique unreliable, because the diagnosis depends mainly on the doctor's individual opinion and experience. In this paper, some methods of automatic analysis of prostate perfusion tomographic images are presented and discussed. Some of the presented methods are adopted from papers of other researchers, and some were developed by the authors. This presentation of the methods and algorithms is important, but it is not the main scope of the paper. The main purpose of this study is computational (deterministic and independent) verification of the usefulness of the p-CT technique in a specific case. It shows that it is possible to find computationally attainable properties of p-CT images that allow pointing out the cancerous lesion and can be used in computer-aided medical diagnosis.
8. Perfusion computed tomography in the prostate cancer diagnosis
EN
One of the main causes of the still-high mortality among patients suffering from prostate cancer is detection that comes too late. The existing diagnostic difficulties prompt the search for new, better diagnostic methods, for example specific biomarkers or advanced imaging techniques. One proposal with the potential to increase early detection of prostate cancer is perfusion computed tomography. This method has been tested for some years at the Oncology Center, Cracow. Unfortunately, perfusion prostate images are unclear and difficult to interpret. Therefore, an attempt was made to develop algorithms using image processing and pattern recognition techniques which, it seems, can greatly facilitate the process of finding the correct cancer location. The results of the proposed algorithm are promising, but the test data were not fully representative, because too few cases were analyzed, including few healthy patients. Hence, the need for further research on a larger group of patients is obvious. This means that a simple method must be created for automatically verifying the proposed locations against indications confirmed by another technique. The most reliable verification technique is histological evaluation of postoperative specimens; however, it cannot be used in all cases, and a different imaging plane creates additional difficulties.
PL (translated)
One of the main causes of the still-high mortality among prostate cancer patients is detecting the tumor too late. The existing diagnostic difficulties prompt the search for new, better methods, e.g. specific biomarkers or advanced imaging techniques. One proposal with the potential to increase the detection of early prostate cancer is perfusion computed tomography. This method has been tested for several years at the Cracow branch of the Oncology Center. However, the perfusion image of the prostate is indistinct and difficult to interpret, so an attempt was made to develop algorithms using computer image processing and recognition techniques, which, it seems, can significantly facilitate the search for and correct localization of the tumor. The proposed algorithm obtained promising results on the test data, which, however, were not fully representative, as they included too few cases, among them few healthy subjects. Hence the need to extend the research to a wider group of patients, which entails developing a simple method for automatically verifying the algorithm's indications against a tumor location confirmed by another method. The most reliable comparison method is the histopathological evaluation of postoperative specimens. However, it cannot be applied to all patients, and a different imaging plane creates additional difficulties.
9. Picture Languages in Automatic Radiological Palm Interpretation
EN
The paper presents a new technique for cognitive analysis and recognition of pathological wrist bone lesions. This method uses AI techniques and mathematical linguistics, allowing us to automatically evaluate the structure of these bones based on palm radiological images. Possibilities of computer interpretation of selected images, based on the methodology of automatic medical image understanding introduced by the authors, were created owing to an original relational description of individual palm bones. This description was built with the use of graph linguistic formalisms already applied in artificial intelligence. The research described in this paper demonstrates that, for the needs of palm bone diagnostics, specialist linguistic tools such as expansive graph grammars and EDT-label graphs are particularly well suited. Defining a graph image language adjusted to the specific features of the problem described here permitted a semantic description of correct palm bone structures. It also enabled the interpretation of images showing inborn lesions, such as additional bones, or acquired lesions, such as incorrect junctions resulting from injuries and synostoses.
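The relational description above can be pictured as a labeled graph: bones are nodes, expected junctions are edges, and a patient's graph is compared against the normal model to flag extra bones or incorrect junctions. The bone names and the heavily simplified adjacency below are illustrative assumptions, not the paper's EDT-label graphs or grammar rules:

```python
# Sketch: compare a patient's bone-junction graph with a normal model.
# Extra nodes ~ in-born additional bones; extra edges ~ incorrect junctions.
NORMAL = {
    "scaphoid": {"lunate", "capitate"},
    "lunate": {"scaphoid", "triquetrum"},
    "triquetrum": {"lunate"},
    "capitate": {"scaphoid"},
}

def lesions(patient):
    extra_bones = set(patient) - set(NORMAL)
    wrong_junctions = {
        (bone, nbr)
        for bone, nbrs in patient.items() if bone in NORMAL
        for nbr in nbrs - NORMAL[bone] if nbr in NORMAL
    }
    return extra_bones, wrong_junctions

patient = dict(NORMAL, **{
    "os_centrale": {"scaphoid"},         # an in-born additional bone
    "capitate": {"scaphoid", "lunate"},  # an acquired incorrect junction
})
print(lesions(patient))  # -> ({'os_centrale'}, {('capitate', 'lunate')})
```

A graph grammar goes further than this static check: its production rules generate exactly the correct structures, so any image whose graph cannot be parsed is flagged as pathological.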
10. Self-learning model-based segmentation of medical images
EN
Interaction increases the flexibility of segmentation, but it leads to undesirable behaviour of an algorithm if the knowledge requested is inappropriate. In region growing, this is the case when defining the homogeneity criterion, as its specification also depends on image formation properties that are not known to the user. We developed a region growing algorithm that learns its homogeneity criterion automatically from characteristics of the region to be segmented. The method is based on a model that describes homogeneity and simple shape properties of the region. Parameters of the homogeneity criterion are estimated from sample locations in the region. These locations are selected sequentially in a random walk starting at the seed point, and the homogeneity criterion is updated continuously. In contrast to other adaptive region growing methods, our approach produces results that are far less sensitive to the seed point location, and it allows segmentation of individual structures. The model-based adaptive region growing approach was extended to a fully automatic and complete segmentation method by using the pixel with the smallest gradient length in the not-yet-segmented image region as a seed point. Both methods were tested on test images and on structures in CT images. The performance of the semi-automatic method is compared with the adaptive moving mean value region growing method, and the automatic method is compared with watershed segmentation. We found our method to work reliably if the model assumptions on homogeneity and region characteristics were true. Furthermore, the model is simple but robust, allowing a certain amount of deviation from model constraints while still delivering the expected segmentation result.
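The core idea, a homogeneity criterion estimated from the region itself rather than fixed in advance, can be sketched as follows. Note two simplifications that are assumptions of this sketch, not the paper's method: the criterion (mean ± k·std) is updated from all accepted pixels instead of a random walk's samples, and the tolerance factor k and the toy image are illustrative:

```python
# Sketch: region growing whose acceptance test (mean +/- k * std) is
# re-estimated continuously from the pixels accepted so far.
from statistics import mean, pstdev

def adaptive_region_grow(image, seed, k=2.0, min_std=1.0):
    h, w = len(image), len(image[0])
    region = {seed}
    samples = [image[seed[0]][seed[1]]]
    frontier = [seed]
    while frontier:
        y, x = frontier.pop()
        mu = mean(samples)
        sigma = max(pstdev(samples), min_std)  # floor avoids a degenerate criterion
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in region:
                v = image[ny][nx]
                if abs(v - mu) <= k * sigma:  # criterion learned from the region
                    region.add((ny, nx))
                    samples.append(v)
                    frontier.append((ny, nx))
    return region

# A bright 2x2 structure (~100) on a dark background (~10).
image = [
    [10, 11, 10, 12],
    [10, 100, 101, 11],
    [12, 99, 102, 10],
    [11, 10, 12, 11],
]
print(sorted(adaptive_region_grow(image, seed=(1, 1))))
```

Because the criterion adapts to the region's own statistics, any seed inside the bright structure yields the same segmentation, which is the seed-insensitivity the abstract claims.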