Article title

Classification-Segmentation Pipeline for MRI via Transfer Learning and Residual Networks

Identifiers
Title variants
Publication languages
EN
Abstracts
EN
Integrating artificial intelligence into brain magnetic resonance imaging (MRI) and clinical practice promises substantial improvements in cancer diagnosis. Advances in deep learning have improved the processing and analysis of MRI, boosting model performance, reducing the harmful effects of data-source overload, and increasing detection accuracy and time efficiency. MRI data also drives diverse research fields such as image processing and analysis, detection, registration, segmentation, and classification. This paper proposes a decision-making pipeline for MRI data that combines image classification and segmentation. First, the pipeline produces a classification decision for a given MRI image. If the image is classified as defective, the pipeline then extracts the defect regions and highlights them accordingly. We have implemented several advanced convolutional neural networks with transfer learning and residual techniques to address these two broad clinical concerns in a single decision-making workflow.
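The two-stage decision flow described in the abstract (classify first, then segment only images flagged as defective) can be sketched as follows. This is an illustrative sketch only: `classify` and `segment` are hypothetical toy stand-ins for the paper's transfer-learned classification CNNs and residual segmentation network, whose architectures are not specified in this record.

```python
import numpy as np

def classify(image):
    # Stand-in for a transfer-learned CNN classifier (e.g. a ResNet
    # fine-tuned on MRI slices); here, mean intensity serves as a toy
    # score. Returns 1.0 for "defective", 0.0 for "normal".
    return float(image.mean() > 0.5)

def segment(image):
    # Stand-in for a residual segmentation network (e.g. a U-Net-style
    # model); here, a simple intensity threshold yields a binary mask.
    return (image > 0.8).astype(np.uint8)

def pipeline(image):
    # Stage 1: classification. Only defective images reach stage 2.
    if classify(image) < 0.5:
        return "normal", None
    # Stage 2: segmentation produces a mask highlighting defect regions.
    return "defective", segment(image)
```

The design point is that segmentation, the more expensive stage, runs only on images the classifier flags, which is what lets one workflow serve both clinical concerns.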
Keywords
Year
Volume
Pages
39–43
Physical description
Bibliography: 27 items, photographs, tables, charts
Authors
  • Technische Universität Berlin, Straße des 17. Juni 135, 10623 Berlin, Germany
  • Can Tho University of Technology, 74000 Can Tho City, Vietnam
  • Can Tho University, 74000 Can Tho City, Vietnam
Bibliography
  • 1. L. da Cruz, C. Sierra-Franco, G. Silva-Calpa, and A. Raposo, “Enabling autonomous medical image data annotation: A human-in-the-loop reinforcement learning approach,” in Proceedings of the 16th Conference on Computer Science and Intelligence Systems, ser. Annals of Computer Science and Information Systems, M. Ganzha, L. Maciaszek, M. Paprzycki, and D. Ślęzak, Eds., vol. 25. IEEE, 2021, pp. 271-279.
  • 2. J. Dörpinghaus, S. Schaaf, V. Weil, and T. Hübenthal, “An efficient approach towards the generation and analysis of interoperable clinical data in a knowledge graph,” in Proceedings of the 16th Conference on Computer Science and Intelligence Systems, ser. Annals of Computer Science and Information Systems, M. Ganzha, L. Maciaszek, M. Paprzycki, and D. Ślęzak, Eds., vol. 25. IEEE, 2021, pp. 59-68.
  • 3. R. Damaševičius, O. Abayomi-Alli, R. Maskeliūnas, and A. Abayomi-Alli, “Bilstm with data augmentation using interpolation methods to improve early detection of parkinson disease,” in Proceedings of the 2020 Federated Conference on Computer Science and Information Systems, ser. Annals of Computer Science and Information Systems, S. Agarwal, D. N. Barrell, and V. K. Solanki, Eds. IEEE, 2020, pp. 371-380.
  • 4. M. L. Giger, “Machine learning in medical imaging,” Journal of the American College of Radiology, vol. 15, no. 3, pp. 512-520, 2018.
  • 5. N. Duong-Trung, X. N. Hoang, T. B. T. Tu, K. N. Minh, V. U. Tran, and T.-D. Luu, “Blueprinting the workflow of medical diagnosis through the lens of machine learning perspective,” in 2019 International Conference on Advanced Computing and Applications (ACOMP). IEEE, 2019, pp. 23-26.
  • 6. N. Duong-Trung, N. Quynh, T. Tang, and X. S. Ha, “Interpretation of machine learning models for medical diagnosis,” Advances in Science, Technology and Engineering Systems Journal, vol. 5, no. 5, pp. 469-477, 2020.
  • 7. M. Fairley, D. Scheinker, and M. L. Brandeau, “Improving the efficiency of the operating room environment with an optimization and machine learning model,” Health care management science, vol. 22, no. 4, pp. 756-767, 2019.
  • 8. H. Benbrahim, H. Hachimi, and A. Amine, “Deep convolutional neural network with tensorflow and keras to classify skin cancer images,” Scalable Computing: Practice and Experience, vol. 21, no. 3, pp. 379-390, 2020.
  • 9. S. Kusuma and J. D. Udayan, “Analysis on deep learning methods for ecg based cardiovascular disease prediction,” Scalable Computing: Practice and Experience, vol. 21, no. 1, pp. 127-136, 2020.
  • 10. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” Advances in neural information processing systems, vol. 25, pp. 1097-1105, 2012.
  • 11. T. Kaur and T. K. Gandhi, “Deep convolutional neural networks with transfer learning for automated brain image classification,” Machine Vision and Applications, vol. 31, no. 3, pp. 1-16, 2020.
  • 12. M. Mittal, M. Arora, T. Pandey, and L. M. Goyal, “Image segmentation using deep learning techniques in medical images,” in Advancement of machine intelligence in interactive medical image analysis. Springer, 2020, pp. 41-63.
  • 13. Z. Akkus, A. Galimzianova, A. Hoogi, D. L. Rubin, and B. J. Erickson, “Deep learning for brain mri segmentation: state of the art and future directions,” Journal of digital imaging, vol. 30, no. 4, pp. 449-459, 2017.
  • 14. G. Liang and L. Zheng, “A transfer learning method with deep residual network for pediatric pneumonia diagnosis,” Computer methods and programs in biomedicine, vol. 187, p. 104964, 2020.
  • 15. W. Ying, Y. Zhang, J. Huang, and Q. Yang, “Transfer learning via learning to transfer,” in International Conference on Machine Learning. PMLR, 2018, pp. 5085-5094.
  • 16. Q. Yang, Y. Zhang, W. Dai, and S. J. Pan, Transfer learning. Cambridge University Press, 2020.
  • 17. N. Duong-Trung, L.-D. Quach, and C.-N. Nguyen, “Learning deep transferability for several agricultural classification problems,” International Journal of Advanced Computer Science and Applications, vol. 10, no. 1, 2019.
  • 18. N. Duong-Trung, L.-D. Quach, M.-H. Nguyen, and C.-N. Nguyen, “A combination of transfer learning and deep learning for medicinal plant classification,” in Proceedings of the 2019 4th International Conference on Intelligent Information Technology, 2019, pp. 83-90.
  • 19. N. Duong-Trung, L.-D. Quach, M.-H. Nguyen, and C.-N. Nguyen, “Classification of grain discoloration via transfer learning and convolutional neural networks,” in Proceedings of the 3rd International Conference on Machine Learning and Soft Computing, 2019, pp. 27-32.
  • 20. N. Abraham and N. M. Khan, “A novel focal tversky loss function with improved attention u-net for lesion segmentation,” in 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019). IEEE, 2019, pp. 683-687.
  • 21. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770-778.
  • 22. C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi, “Inception-v4, inception-resnet and the impact of residual connections on learning,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 31, no. 1, 2017.
  • 23. M. Tan and Q. Le, “Efficientnet: Rethinking model scaling for convolutional neural networks,” in International Conference on Machine Learning. PMLR, 2019, pp. 6105-6114.
  • 24. M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, “Mobilenetv2: Inverted residuals and linear bottlenecks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 4510-4520.
  • 25. D. Sarkar, R. Bali, and T. Ghosh, Hands-On Transfer Learning with Python: Implement advanced deep learning and neural network models using TensorFlow and Keras. Packt Publishing Ltd, 2018.
  • 26. F. I. Diakogiannis, F. Waldner, P. Caccetta, and C. Wu, “Resunet-a: a deep learning framework for semantic segmentation of remotely sensed data,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 162, pp. 94-114, 2020.
  • 27. N. Duong-Trung, Social Media Learning: Novel Text Analytics for Geolocation and Topic Modeling. Cuvillier Verlag, 2017.
Remarks
Record created with funding from the Ministry of Education and Science (MEiN), agreement no. SONP/SP/546092/2022, under the "Social Responsibility of Science" programme, module: Popularization of science and promotion of sport (2022-2023).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-6c94482c-6f2f-45ac-9cd3-d00aa21cf445