Article title

Application of convolutional gated recurrent units u-net for distinguishing between retinitis pigmentosa and cone–rod dystrophy

Publication languages
EN
Abstracts
EN
Artificial Intelligence (AI) has gained a prominent role in the medical industry. The rapid development of computer science has made AI a meaningful part of modern healthcare. Image-based analysis involving neural networks is a very important part of eye diagnosis. In this study, a new approach using a Convolutional Gated Recurrent Units (GRU) U-Net was proposed for classifying healthy cases and cases with retinitis pigmentosa (RP) and cone–rod dystrophy (CORD). The basis for the classification was the location of pigmentary changes within the retina and the fundus autofluorescence (FAF) pattern, as the posterior pole or the periphery of the retina may be affected. The dataset, gathered in the Chair and Department of General and Pediatric Ophthalmology of the Medical University of Lublin, consisted of 230 ultra-widefield pseudocolour (UWFP) and ultra-widefield FAF images, obtained using the Optos 200TX device (Optos PLC). The data were divided into three categories: healthy subjects (50 images), patients with CORD (48 images) and patients with RP (132 images). Because deep learning classification relies on a large amount of data, the dataset was artificially enlarged using augmentation involving image manipulations. The final dataset contained 744 images. The proposed Convolutional GRU U-Net network was evaluated using the following measures: accuracy, precision, sensitivity, specificity and F1. The proposed tool achieved high accuracy, in the range of 91.00%–97.90%. The developed solution has great potential as a supporting tool in RP diagnosis.
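The abstract states only that the 230 source images were enlarged to 744 through "image manipulations", without naming the transforms. A minimal sketch of such geometric augmentation, assuming simple NumPy flips and a 180° rotation (illustrative choices, not the authors' pipeline):

```python
import numpy as np

def augment(image: np.ndarray) -> list[np.ndarray]:
    """Return simple geometric variants of one fundus image.

    The specific transforms here (horizontal flip, vertical flip,
    180-degree rotation) are assumed for illustration; the paper
    only states that image manipulations were used.
    """
    return [
        np.fliplr(image),      # horizontal flip
        np.flipud(image),      # vertical flip
        np.rot90(image, k=2),  # 180-degree rotation
    ]

# A tiny 2x2 single-channel stand-in for a UWFP image.
img = np.array([[1, 2],
                [3, 4]])
variants = augment(img)
print(len(variants) + 1)  # original + 3 variants = 4
```

Applied uniformly, each source image plus three variants roughly quadruples the dataset, which is the order of growth reported (230 to 744 images).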
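The five evaluation measures named in the abstract all follow from confusion-matrix counts; a minimal sketch of their standard definitions, using hypothetical counts (not the study's results):

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute the five measures used in the evaluation from
    confusion-matrix counts (per class, one-vs-rest)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)   # also known as recall
    specificity = tn / (tn + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "precision": precision,
            "sensitivity": sensitivity, "specificity": specificity,
            "f1": f1}

# Hypothetical counts, chosen only to illustrate the formulas.
m = classification_metrics(tp=90, fp=5, tn=95, fn=10)
print(round(m["accuracy"], 3))  # 0.925
```

For a three-class problem such as healthy/CORD/RP, these measures are typically computed one class at a time against the rest and then averaged.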
Year
Pages
505--513
Physical description
Bibliography: 38 items, figures, tables, charts.
Authors
  • Faculty of Electrical Engineering and Computer Science, Department of Computer Science, Lublin University of Technology, Nadbystrzycka 38D, 20-618 Lublin, Poland
  • Faculty of Electrical Engineering and Computer Science, Department of Computer Science, Lublin University of Technology, Nadbystrzycka 38D, 20-618 Lublin, Poland
  • Faculty of Medicine, Chair and Department of General and Pediatric Ophthalmology, Medical University of Lublin, Chmielna 1, 20-079 Lublin, Poland
  • Faculty of Medicine, Chair and Department of General and Pediatric Ophthalmology, Medical University of Lublin, Chmielna 1, 20-079 Lublin, Poland
Bibliography
  • 1. Abeysinghe A, Tohmuang S, Davy JL, Fard M. Data augmentation on convolutional neural networks to classify mechanical noise. Appl. Acoust. 2023;203:109209.
  • 2. Alomar K, Aysel HI, Cai X. Data augmentation in classification and segmentation: A survey and new strategies. J. Imaging. 2023;9(2):46.
  • 3. Azad R, Asadi-Aghbolaghi M, Fathy M, Escalera S. Bi-directional ConvLSTM U-Net with densley connected convolutions. 2019. Proc - IEEE/CVF international conference on computer vision workshops.
  • 4. Baratloo A, Hosseini M, Negida A, El Ashal G. Part 1: Simple definition and calculation of accuracy, sensitivity and specificity. Emergency. 2015;3(2):48-49.
  • 5. Berger W, Kloeckener-Gruissem B, Neidhardt J. The molecular basis of human retinal and vitreoretinal diseases. Prog Retin Eye Res. 2010;29(5):335–75.
  • 6. Bonnici E, Arn P. The impact of data augmentation on classification accuracy and training time in handwritten character recognition. KTH Royal Institute of Technology. 2021.
  • 7. Brancati N, Frucci M, Gragnaniello D, Riccio D, Di Iorio V, Di Perna L. Automatic segmentation of pigment deposits in retinal fundus images of Retinitis Pigmentosa. Comput. Med. Imag. Graph. 2018;66:73-81.
  • 8. Brancati N, Frucci M, Gragnaniello D, Riccio D, Di Iorio V, Di Perna L, Simonelli F. Learning-based approach to segment pigment signs in fundus images for retinitis pigmentosa analysis. Neurocomputing. 2018;308:159-171.
  • 9. Chen JX, Jiang DM, Zhang YN. A hierarchical bidirectional GRU model with attention for EEG-based emotion classification. IEEE Access. 2019;7:118530-118540.
  • 10. Das H, Saha A, Deb S. An expert system to distinguish a defective eye from a normal eye. Proc - 2014 International Conference on Issues and Challenges in Intelligent Computing Techniques (ICICT). IEEE. 2014:155-158.
  • 11. Fahim AT, Daiger SP, Weleber RG. Nonsyndromic retinitis pigmentosa overview. 2017. In: Adam MP, Ardinger HH, Pagon RA, Wallace SE, Bean LJH, Stephens K, Amemiya A, eds. GeneReviews. Seattle: University of Washington.
  • 12. Gill JS, Georgiou M, Kalitzeos A, Moore AT, Michaelides M. Progressive cone and cone-rod dystrophies: Clinical features, molecular genetics and prospects for therapy. Br. J. Ophthalmol. 2019;103(5):711-720.
  • 13. Graves A, Mohamed AR, Hinton G. Speech recognition with deep recurrent neural networks. Proc - IEEE Int. Conf. Acoust., Speech Signal Process. 2013:6645-6649.
  • 14. Guo C, Yu M, Li J. Prediction of different eye diseases based on fundus photography via deep transfer learning. J. Clin. Med. 2021;10(23):5481.
  • 15. Huang G, Liu Z, Van Der Maaten L, Weinberger KQ. Densely connected convolutional networks. Proc - IEEE conference on computer vision and pattern recognition. 2017:4700-4708.
  • 16. Hartong DT, Berson EL, Dryja TP. Retinitis pigmentosa. Lancet. 2006;368(9549):1795-809. doi: 10.1016/S0140-6736(06)69740-7
  • 17. Ioffe S, Szegedy C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. Proc - International conference on machine learning. 2015:448-456.
  • 18. Jain L, Murthy HS, Patel C, Bansal D. Retinal eye disease detection using deep learning. Proc - Fourteenth International Conference on Information Processing (ICINPRO). IEEE. 2018:1-6.
  • 19. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436-444.
  • 20. Liu TYA, Ling C, Hahn L, Jones CK, Boon CJ, Singh MS. Prediction of visual impairment in retinitis pigmentosa using deep learning and multimodal fundus images. Br. J. Ophthalmol. 2022.
  • 21. Masumoto H, Tabuchi H, Nakakura S, Ohsugi H, Enno H, Ishitobi N, Ohsugi E, Mitamura Y. Accuracy of a deep convolutional neural network in detection of retinitis pigmentosa on ultrawide-field images. PeerJ. 2019;7:e6900.
  • 22. Merin S, Auerbach E. Retinitis pigmentosa. Surv. Ophthalmol. 1976;20(5):303-46. doi: 10.1016/s0039-6257(96)90001-6
  • 23. Monaghan TF, Rahman SN, Agudelo CW, Wein AJ, Lazar JM, Everaert K, Dmochowski RR. Foundational statistical principles in medical research: sensitivity, specificity, positive predictive value, and negative predictive value. Medicina. 2021;57(5):503.
  • 24. Oishi A, Miyata M, Numa S, Otsuka Y, Oishi M, Tsujikawa A. Wide-field fundus autofluorescence imaging in patients with hereditary retinal degeneration: a literature review. Int. J. Retina Vitr. 2019;5(Suppl 1):23. https://doi.org/10.1186/s40942-019-0173-z.
  • 25. Piri N, Grodsky JD, Kaplan HJ. Gene therapy for retinitis pigmentosa. Taiwan J. Ophthalmol. 2021;11(4):348-351.
  • 26. RetNet Retinal Information Network. https://sph.uth.edu/retnet/ [accessed 6.06.2023]
  • 27. Robson AG, Egan CA, Luong VA, Bird AC, Holder GE, Fitzke FW. Comparison of FAF with photopic and scotopic fine-matrix mapping in patients with retinitis pigmentosa and normal visual acuity. Invest. Ophthalmol. Vis. Sci. 2004;45(11):4119-4125.
  • 28. Romo-Bucheli D, Schmidt-Erfurth U, Bogunović H. End-to-end deep learning model for predicting treatment requirements in neovascular AMD from longitudinal retinal OCT imaging. IEEE J. Biomed. Health Inform. 2020;24(12):3456-3465.
  • 29. Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation. Proc - 18th International Conference, Munich, Germany, October 5-9, 2015, Part III: 234-241. Springer International Publishing.
  • 30. Schmitz-Valckenberg S, Holz FG, Bird AC, Spaide RF. Fundus autofluorescence imaging: review and perspectives. Retina. 2008;28(3):385-409.
  • 31. Shi X, Chen Z, Wang H, Yeung DY, Wong WK, Woo WC. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. Adv. Neural. Inf. Process. Syst. 2015;28.
  • 32. Shorten C, Khoshgoftaar TM. A survey on image data augmentation for deep learning. J. Big Data. 2019;6(1):1-48.
  • 33. Skublewska-Paszkowska M, Powroznik P. Temporal Pattern Attention for Multivariate Time Series of Tennis Strokes Classification. Sensors. 2023;23(5):2422.
  • 34. Song H, Wang W, Zhao S, Shen J, Lam KM. Pyramid dilated deeper convlstm for video salient object detection. Proc - European conference on computer vision (ECCV). 2018:715-731.
  • 35. Sun G, Wang X, Xu L, Li C, Wang W, Yi Z, Luo H, Su Y, Zheng J, Li Z, Chen Z, Zheng H, Chen C. Deep learning for the detection of multiple fundus diseases using ultra-widefield images. Ophthalmol. Ther. 2023;12(2):895-907. https://doi.org/10.1007/s40123-022-00627-3
  • 36. Tee JJ, Smith AJ, Hardcastle AJ, Michaelides M. RPGR-associated retinopathy: clinical features, molecular genetics, animal models and therapeutic options. Br J Ophthalmol. 2016;100(8):1022-7. doi: 10.1136/bjophthalmol-2015-307698
  • 37. Wong SC, Gatt A, Stamatescu V, McDonnell MD. Understanding data augmentation for classification: when to warp? Proc - International conference on digital image computing: techniques and applications. IEEE. 2016:1-6.
  • 38. Yang S, Xiao W, Zhang M, Guo S, Zhao J, Shen F. Image data augmentation for deep learning: A survey. 2022. arXiv preprint arXiv:2204.08610.
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-28973ed4-d0db-441c-b8e8-7902dae272b2