

Article title

Devignetting fundus images via Bayesian estimation of illumination component and gamma correction

Languages of publication
EN
Abstracts
EN
Background: Fundus photography is an imaging modality used exclusively in ophthalmology for visualizing structures such as the macula, retina, and optic disc. The fundus camera has only one illumination source, situated at its center; hence, structures away from the center appear darker than they naturally are. This adverse effect caused by uneven illumination is called 'vignetting'. Objectives: An algorithm termed Gamma Correction of Illumination Component (GCIC) for devignetting fundus images is proposed in this paper. Methods: Inspired by the Retinex theory, the illumination component is computed with the help of a Maximum a Posteriori (MAP) estimator. After normalization, the estimated illumination component is subjected to gamma correction to suppress its unevenness. Results: GCIC exhibited comparatively low values of Average Gradient of the Illumination Component (AGIC), Lightness Order Error (LOE), and computational time. The proposed method gave comparatively better performance in terms of contrast-to-noise ratio (CNR), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and entropy. With respect to cumulative performance, GCIC has been observed to outperform other devignetting algorithms in the literature, such as the Illumination Equalization model, Homomorphic Filtering, Adaptive Gamma Correction (AGC), Modified Sigmoid Transform (MST), Imran Qureshi et al. (2019), Zheng et al., Variation-based Fusion (VF), and Zhou et al. Conclusion: GCIC corrects the uneven background illumination without scaling or boosting it intolerably. It produces output images that are natural in appearance, free from color artefacts, and that preserve the sharpness of the fundus features. It is also computationally fast.
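The Methods sentence above outlines a Retinex-style pipeline: estimate the illumination component, normalize it, flatten it with gamma correction, and recombine it with the reflectance. The following is a minimal Python sketch of that pipeline, not the authors' implementation: the paper estimates the illumination with a MAP estimator whose prior and likelihood are not given in this record, so a large-kernel Gaussian blur is used here as a stand-in, and the function name devignette and the parameters gamma and sigma are illustrative assumptions.

```python
# Minimal sketch of a Retinex-style devignetting pipeline in the spirit of GCIC.
# NOTE: the paper uses a MAP estimator for the illumination component; a heavy
# Gaussian blur is only a stand-in here, and all names/parameters are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def devignette(rgb, gamma=0.6, sigma=60.0, eps=1e-6):
    """Suppress uneven illumination in a fundus image (float RGB in [0, 1])."""
    rgb = np.asarray(rgb, dtype=np.float64)
    # Luminance-like channel (HSV value); colour ratios are preserved later,
    # which keeps the output free of colour artefacts.
    value = rgb.max(axis=2)
    # Stand-in for the MAP-estimated illumination component: a heavily
    # smoothed copy of the value channel captures the slow vignetting trend.
    illum = gaussian_filter(value, sigma=sigma)
    # Retinex-style decomposition: reflectance = observation / illumination.
    reflectance = value / np.maximum(illum, eps)
    # Normalize the illumination to [0, 1] and flatten it with gamma correction.
    illum_norm = (illum - illum.min()) / max(illum.max() - illum.min(), eps)
    illum_corr = illum_norm ** gamma
    # Recombine, then rescale all colour channels by the corrected value channel.
    value_corr = np.clip(reflectance * illum_corr, 0.0, 1.0)
    scale = value_corr / np.maximum(value, eps)
    return np.clip(rgb * scale[..., None], 0.0, 1.0)
```

Usage would be along the lines of `corrected = devignette(img / 255.0)` for a uint8 RGB fundus image `img`. With gamma < 1, the normalized illumination is lifted more strongly in its darker peripheral regions than near the bright center, which suppresses the vignetting trend without intolerably boosting the overall brightness.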
Authors
  • Department of Electronics and Communication Engineering, Karunya Institute of Technology and Sciences, Coimbatore, Tamil Nadu, India
  • Department of Electronics and Communication Engineering, Karunya Institute of Technology and Sciences, Coimbatore 641 114, Tamil Nadu, India
Bibliography
  • [1] Shen Y, Sheng B, Fang R, Li H, Dai L, Stolte S, et al. Domain-invariant interpretable fundus image quality assessment. Med Image Anal 2020;61:101654.
  • [2] Quellec G, Lamard M, Conze P-H, Massin P, Cochener B. Automatic detection of rare pathologies in fundus photographs using few-shot learning. Med Image Anal 2020;61:101660.
  • [3] He Y, Jiao W, Shi Y, Lian J, Zhao B, Zou W, et al. Segmenting diabetic retinopathy lesions in multispectral images using low-dimensional spatial-spectral matrix representation. IEEE J Biomed Health Inf 2020;24(2):493–502.
  • [4] Li L, Xu M, Liu H, Li Y, Wang X, Jiang L, et al. A large-scale database and a CNN model for attention-based glaucoma detection. IEEE Trans Med Imaging 2020;39(2):413–24.
  • [5] Xu X, Zhang L, Li J, Guan Y, Zhang L. A hybrid global-local representation CNN model for automatic cataract grading. IEEE J Biomed Health Inf 2020;24(2):556–67.
  • [6] Ren X, Li M, Cheng W, Liu J. Joint enhancement and denoising method via sequential decomposition. In: Proc IEEE International Symposium on Circuits and Systems (ISCAS), Florence; 2018. p. 1–5.
  • [7] Li M, Liu J, Yang W, Sun X, Guo Z. Structure-revealing low-light image enhancement via robust Retinex model. IEEE Trans Image Process 2018;27(6):2828–41.
  • [8] Huang S, Cheng F, Chiu Y. Efficient contrast enhancement using adaptive gamma correction with weighting distribution. IEEE Trans Image Process 2013;22(3):1032–41.
  • [9] Zheng Y, Lin S, Kambhamettu C, Yu J, Kang SB. Single-image vignetting correction. IEEE Trans Pattern Anal Mach Intell 2009;31(12):2243–56.
  • [10] Kang S, Weiss R. Can we calibrate a camera using an image of a flat textureless Lambertian surface? ECCV 2000;2:640–53.
  • [11] Wang S, Zheng J, Hu H, Li B. Naturalness preserved enhancement algorithm for non-uniform illumination images. IEEE Trans Image Process 2013;22(9):3538–48.
  • [12] Tian Q-C, Cohen LD. A variational-based fusion model for non-uniform illumination image enhancement via contrast optimization and color correction. Signal Process 2018;153:210–20.
  • [13] Zhou M, Jin K, Wang S, Ye J, Qian D. Color retinal image enhancement based on luminosity and contrast adjustment. IEEE Trans Biomed Eng 2018;65(3):521–7.
  • [14] Fu X, Liao Y, Zeng D, Huang Y, Zhang X, Ding X. A probabilistic method for image enhancement with simultaneous illumination and reflectance estimation. IEEE Trans Image Process 2015;24(12):4965–77.
  • [15] Lu Y, Xie F, Wu Y, Jiang Z, Meng R. No reference uneven illumination assessment for dermoscopy images. IEEE Signal Process Lett 2015;22(5):534–8.
  • [16] Xie F, Lu Y, Bovik AC, Jiang Z, Meng R. Application-driven no-reference quality assessment for dermoscopy images with multiple distortions. IEEE Trans Biomed Eng 2016;63(6):1248–56.
  • [17] Zhang Y, Liu H, Huang N, Wang Z. Dynamical stochastic resonance for non-uniform illumination image enhancement. IET Image Process 2018;12(12):2147–52.
  • [18] Zhao Y, Liu Y, Xie J, Zhang H, Zheng Y, Zhao Y, et al. Retinal Vascular Network Topology Reconstruction, and Artery/Vein Classification via Dominant Set Clustering. IEEE Trans Med Imaging 2020;39(2):341–56.
  • [19] Nakano M, Ikeda Y, Tokuda Y, Fuwa M, Omi N, Ueno M, et al. Common variants in CDKN2B-AS1 associated with optic-nerve vulnerability of glaucoma identified. Deep learning algorithm predicts diabetic retinopathy progression in individual patients. PLoS One 2019;7(3).
  • [20] Qureshi I, Ma J, Shaheed K. A hybrid proposed fundus image enhancement framework for diabetic retinopathy. Algorithms 2019;12(1).
  • [21] Singh N, Kaur L, Singh K. Histogram equalization techniques for enhancement of low radiance retinal images for early detection of diabetic retinopathy. Eng Sci Technol 2019;22(3):736–45.
  • [22] Long S, Huang X, Chen Z, Pardhan S, Zheng D. Automatic detection of hard exudates in color retinal images using dynamic threshold and SVM classification: algorithm development and evaluation. 2019;13.
  • [23] Wang Y, Cao Y, Zha Z-J, Zhang J, Xiong Z, Zhang W, et al. Progressive Retinex: mutually reinforced illumination-noise perception network for low light image enhancement. Comput Vision Pattern Recogn 2019.
  • [24] Uribe-Valencia LJ, Martínez-Carballido JF. Automated optic disc region location from fundus images: using local multi-level thresholding, best channel selection, and an intensity profile model. Biomed Signal Process Control 2019;51:148–61.
  • [25] Sahlsten J, Jaskari J, Kivinen J, Turunen L, Jaanio E, Hietala K, et al. Deep learning fundus image analysis for diabetic retinopathy and macular edema grading. Sci Rep 2019.
  • [26] Li F, Liu Z, Chen H, Jiang M, Zhang X, Wu Z. Automatic detection of diabetic retinopathy in retinal fundus photographs based on deep learning algorithm. Transl Vis Sci Technol 2019;8(6):4.
  • [27] Rajalakshmi R, Subashini R, Anjana RM, Mohan V. Automated diabetic retinopathy detection in smartphone-based fundus photography using artificial intelligence. Eye 2018:1138–44.
  • [28] Sonali, Sahu S, Singh AK, Ghrera SP, Elhoseny M. An approach for de-noising and contrast enhancement of retinal fundus image using CLAHE. Opt Laser Technol 2019;110:87–98.
  • [29] Pachiyappan A, Das UN, Murthy TVSP, Tatavarti R. Automated diagnosis of diabetic retinopathy and glaucoma using fundus and OCT images. Lipids Health Dis 2012.
  • [30] Shamsudeen FM, Raju G. An objective function based technique for devignetting fundus imagery using MST. Inf Med Unlocked 2019;14:82–91.
  • [31] Pruthi J, Khanna K, Arora S. Optic Cup segmentation from retinal fundus images using Glowworm Swarm Optimization for glaucoma detection. Biomed Signal Process Control 2020;60.
  • [32] Shankar K, Sait ARW, Gupta D, Lakshmanaprabu SK, Khanna A, Pandey HM. Automated detection and classification of fundus diabetic retinopathy images using synergic deep learning model. Pattern Recogn Lett 2020;133:210–6.
  • [33] Youssif AAHAR, Ghalwash AZ, Ghoneim AASAR. Optic disc detection from normalized digital fundus images by means of a vessels direction matched filter. IEEE Trans Med Imaging 2008;27(1):11–8.
  • [34] Fan CN, Zhang FY. Homomorphic filtering based illumination normalization method for face recognition. Pattern Recogn Lett 2011;32(10):1468–79.
  • [35] Timischl FJS. The contrast-to-noise ratio for image quality evaluation in scanning electron microscopy. Scanning 2015;37:54–62.
  • [36] Wang Z, Bovik AC, Sheikh HR, Simoncelli EP. Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 2004;13(4):600–12.
  • [37] Ma J, Qureshi I, Abbas Q. Recent development on detection methods for the diagnosis of diabetic retinopathy. Symmetry 2019;11:1–34.
  • [38] Qureshi I, Ma J, Attique M, Sharif M, Saba T. Detection of glaucoma based on cup-to-disc ratio using fundus images. Int J Intell Syst Technol Appl 2020;19(1).
  • [39] Alshayeji M, Al-Roomi SA, Abed SE. Optic disc detection in retinal fundus images using gravitational law-based edge detection. Med Biol Eng Comput 2017;55:935–48.
  • [40] Luo Z, Liu Z, Li J. Micro blood vessel detection using K-means clustering and morphological thinning. Adv Neural Netw – ISNN 2011; 2011:6677.
Notes
Record compiled with funds from the Ministry of Science and Higher Education (MNiSW), agreement No. 461252, under the programme "Społeczna odpowiedzialność nauki" (Social Responsibility of Science), module: Popularisation of science and promotion of sport (2021).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-f7a6a2e8-0e79-4fec-9cd0-c705e3c98647