Background and objective: Retinal image quality assessment is an essential task in the diagnosis of retinal diseases. Recently, deep models have emerged for grading the quality of retinal images. However, current models either directly transfer classification networks originally designed for natural images to retinal image quality classification, or introduce extra image-quality priors via multiple CNN branches or independent CNNs. The purpose of this work is to address retinal image quality assessment with a simple deep model. Methods: We propose a dark and bright channel prior guided deep network for retinal image quality assessment, named GuidedNet. It introduces dark and bright channel priors into a deep network without increasing the number of parameters and can be trained end-to-end. Specifically, the dark and bright channel priors are embedded into the first layer of the deep network to improve the discriminative ability of the deep features. Moreover, we re-annotate a new retinal image quality dataset, called RIQA-RFMiD, for further validation. Results: The proposed method is evaluated on the public retinal image quality dataset Eye-Quality and on our re-annotated dataset RIQA-RFMiD. We obtain average F-scores of 88.03% on Eye-Quality and 66.13% on RIQA-RFMiD. Conclusions: We investigate the utility of dark and bright channel priors for retinal image quality assessment and propose GuidedNet, which embeds these priors into CNNs with little added model burden. To further validate GuidedNet, we re-annotate a new dataset, RIQA-RFMiD. GuidedNet achieves state-of-the-art performance on the public Eye-Quality dataset and on RIQA-RFMiD.
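The dark and bright channel priors mentioned in the abstract are per-pixel statistics borrowed from image dehazing: the minimum (resp. maximum) intensity over all colour channels within a local patch. A minimal NumPy sketch, not the authors' implementation (patch size and the 5-channel stacking are assumptions for illustration), shows how such prior maps could be computed and concatenated with the RGB image before a network's first layer:

```python
import numpy as np

def dark_bright_channels(image, patch=3):
    """Compute dark and bright channel prior maps for an RGB image.

    image: H x W x 3 float array in [0, 1].
    Returns (dark, bright), each H x W: the minimum (resp. maximum)
    intensity over all colour channels within a local patch.
    """
    h, w, _ = image.shape
    pad = patch // 2
    # Per-pixel min/max across the colour channels.
    min_c = image.min(axis=2)
    max_c = image.max(axis=2)
    # Pad so the sliding window is defined at the image borders.
    min_p = np.pad(min_c, pad, mode="edge")
    max_p = np.pad(max_c, pad, mode="edge")
    dark = np.empty((h, w))
    bright = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            dark[i, j] = min_p[i:i + patch, j:j + patch].min()
            bright[i, j] = max_p[i:i + patch, j:j + patch].max()
    return dark, bright

# Illustrative use: stack the two prior maps with the RGB channels to
# form a 5-channel input for the first convolutional layer (the exact
# embedding used by GuidedNet is an assumption here).
img = np.random.rand(8, 8, 3)
dark, bright = dark_bright_channels(img)
guided_input = np.concatenate(
    [img, dark[..., None], bright[..., None]], axis=2)
```

Because the prior maps are computed from the input itself, this adds no learnable parameters, consistent with the "without increasing the number of parameters" claim.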
The segmentation of the liver and liver tumors is an essential step in computer-aided liver disease diagnosis, treatment, and prognosis. Although deep convolutional neural networks have contributed to liver and tumor segmentation, their architectures cannot maintain spatial details and long-range context information. Moreover, the fixed receptive fields of these networks limit segmentation performance on livers and tumors of varying sizes and shapes. To address these problems, we propose a deep attention neural network that combines a high-resolution branch with multi-scale feature aggregation for cascaded liver and tumor segmentation from CT images. Specifically, the high-resolution branch maintains the resolution of the input image and thus preserves spatial details. Multi-scale feature exchange and fusion allow the receptive fields of the network to adapt to livers and tumors of varying shapes and sizes. An appended attention module evaluates the similarity between every pair of pixels to model long-range dependencies and context information, so that the network can segment liver and tumor areas located in distant regions. Experimental results on the LiTS and 3D-IRCADb datasets demonstrate that our method achieves satisfying performance.
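The attention module described above follows the general non-local pattern: compare every spatial position's feature vector against every other, normalise the similarities, and aggregate context from all positions. A minimal NumPy sketch under that assumption (dot-product similarity with a softmax; the paper's exact attention design may differ):

```python
import numpy as np

def pixel_attention(features):
    """Non-local attention over a C x H x W feature map.

    Computes pairwise similarities between all spatial positions and
    returns, at each position, a similarity-weighted sum of the
    features at every position, modelling long-range dependencies.
    """
    c, h, w = features.shape
    x = features.reshape(c, h * w)            # C x N, N = H*W
    # Pairwise dot-product similarities between all N positions.
    sim = x.T @ x                             # N x N
    # Softmax over source positions (numerically stabilised).
    sim = np.exp(sim - sim.max(axis=1, keepdims=True))
    attn = sim / sim.sum(axis=1, keepdims=True)
    # Each output position aggregates context from all positions.
    out = x @ attn.T                          # C x N
    return out.reshape(c, h, w)

feat = np.random.rand(4, 5, 5)
ctx = pixel_attention(feat)
```

In practice such a module is appended after the backbone features, often with a residual connection, so distant but similar regions (e.g. separate tumor areas) reinforce each other's responses.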