Article title

Comparative Analysis and Fusion of MRI and PET Images based on Wavelets for Clinical Diagnosis

Identifiers
Title variants
Publication languages
EN
Abstracts
EN
Nowadays, medical imaging modalities such as Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), Single Photon Emission Computed Tomography (SPECT), and Computed Tomography (CT) play a crucial role in clinical diagnosis and treatment planning. The images obtained from each of these modalities contain complementary information about the imaged organ. Image fusion algorithms are employed to bring this disparate information together into a single image, allowing doctors to diagnose disorders quickly. This paper proposes a novel technique for the fusion of MRI and PET images based on the YUV color space and the wavelet transform. Quality assessment based on entropy showed that the method achieves promising results for medical image fusion. The paper presents a comparative analysis of the fusion of MRI and PET images using different wavelet families at various decomposition levels for the detection of brain tumors as well as Alzheimer's disease. The quality assessment and visual analysis showed that the Dmey wavelet at decomposition level 3 is optimal for the fusion of MRI and PET images. The paper also compares several fusion rules, namely average, maximum, and minimum, and finds that the maximum fusion rule outperforms the other two.
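For readers who want to experiment with the idea summarised in the abstract, the following is a minimal Python sketch (an assumption, not the authors' implementation) of a YUV/wavelet fusion pipeline: the colour PET image is converted to YUV, its luminance (Y) channel is fused with a registered greyscale MRI in the wavelet domain using the maximum fusion rule with the 'dmey' wavelet at decomposition level 3, and Shannon entropy serves as the quality metric. The function names, library choices (OpenCV, NumPy, PyWavelets), and the treatment of the approximation band are assumptions.

# Minimal illustrative sketch (assumed pipeline, not the authors' code).
# Assumes the MRI and PET images are co-registered and have identical sizes.
import cv2
import numpy as np
import pywt

def fuse_mri_pet(mri_gray, pet_bgr, wavelet="dmey", level=3):
    # Take the luminance channel of the pseudo-colour PET image (YUV space).
    pet_yuv = cv2.cvtColor(pet_bgr, cv2.COLOR_BGR2YUV)
    pet_y = pet_yuv[:, :, 0].astype(np.float32)
    mri = mri_gray.astype(np.float32)

    # Multi-level 2-D wavelet decomposition of both luminance images.
    c_mri = pywt.wavedec2(mri, wavelet, level=level)
    c_pet = pywt.wavedec2(pet_y, wavelet, level=level)

    # Maximum fusion rule: keep the coefficient with the larger magnitude.
    fused = [np.where(np.abs(c_mri[0]) >= np.abs(c_pet[0]), c_mri[0], c_pet[0])]
    for detail_mri, detail_pet in zip(c_mri[1:], c_pet[1:]):
        fused.append(tuple(
            np.where(np.abs(a) >= np.abs(b), a, b)
            for a, b in zip(detail_mri, detail_pet)
        ))

    # Reconstruct the fused luminance and put it back into the PET YUV image,
    # so the fused result retains the PET chrominance (colour) information.
    fused_y = pywt.waverec2(fused, wavelet)[:pet_y.shape[0], :pet_y.shape[1]]
    pet_yuv[:, :, 0] = np.clip(fused_y, 0, 255).astype(np.uint8)
    return cv2.cvtColor(pet_yuv, cv2.COLOR_YUV2BGR)

def entropy(gray):
    # Shannon entropy of the grey-level histogram, used here as the quality metric.
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

Swapping the wavelet name ('db4', 'sym4', 'coif5', ...) or the decomposition level, and replacing the np.where selection with an average or minimum of the coefficients, reproduces the kind of comparative analysis of wavelet families and fusion rules reported in the paper.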
Year
Pages
867-873
Physical description
Bibliography: 25 items, photographs, figures, tables, charts
Authors
  • Sahrdaya College of Engineering and Technology, Thrissur, Kerala, India under APJ Abdul Kalam Technological University
author
  • Sahrdaya College of Engineering and Technology, Thrissur, Kerala, India under APJ Abdul Kalam Technological University
Bibliography
  • [1] Chengazi, G. Flux, and G. Cook, “Image registration,” in Clinical Nuclear Medicine, Fourth Edition, pp. 861-867, 2006, https://doi.org/10.1201/b13348-88.
  • [2] A. P. James and B. V. Dasarathy, “Medical image fusion: A survey of the state of the art,” Inf. Fusion, vol. 19, no. 1, pp. 4-19, 2014, https://doi.org/10.1016/j.inffus.2013.12.002.
  • [3] D. K. Sahu and M. P. Parsai, “Different Image Fusion Techniques - A Critical Review,” Int. J. Mod. Eng. Res., vol. 2, no. 5, pp. 4298-4301, 2012.
  • [4] M. Haddadpour, S. Daneshavar, and H. Seyedarabi, “PET and MRI image fusion based on a combination of 2-D Hilbert transform and IHS method,” Biomed. J., vol. 12, no. 6, pp. 1-7, 2017, https://doi.org/10.1016/j.bj.2017.05.002.
  • [5] O. S. Mishra and S. Bhatnagar, “MRI and CT Image Fusion Based on Wavelet Transform,” Int. J. Bio-Science Bio-Technology, vol. 6, no. 3, pp. 149-162, 2014, https://doi.org/10.14257/ijbsbt.2014.6.3.18.
  • [6] K. Chaitanya, G. S. Reddy, V. Bhavana, and G. S. C. Varma, “PET and MRI medical image fusion using STDCT and STSVD,” in 2017 International Conference on Computer Communication and Informatics, ICCCI 2017, 2017, pp. 5-8, https://doi.org/10.1109/ICCCI.2017.8117685.
  • [7] Ashwanth and K. Veera Swamy, “Medical Image Fusion using Transform Techniques,” in ICDCS 2020 - 2020 5th International Conference on Devices, Circuits and Systems, no. 2, pp. 303-306, 2020, https://doi.org/10.1109/ICDCS48716.2020.243604.
  • [8] P. Tank, D. D. Shah, T. V. Vyas, and S. B. Chotaliya, “Image Fusion Based On Wavelet And Curvelet Transform,” IOSR J. VLSI Signal Process., vol. 1, no. 5, pp. 32-36, 2013, https://doi.org/10.9790/4200-0153236.
  • [9] F. Shabanzade and H. Ghassemian, “Combination of wavelet and contourlet transform for PET and MRI image fusion,” in 19th CSI International Symposium on Artificial Intelligence and Signal Processing, AISP 2017, 2017, vol. 2018-January, pp. 178-183, https://doi.org/10.1109/AISP.2017.8324077.
  • [10] A. L. da Cunha, J. Zhou, and M. N. Do, “The nonsubsampled contourlet transform: Theory, design, and applications,” IEEE Trans. Image Process., vol. 15, no. 10, pp. 3089-3101, 2006, https://doi.org/10.1109/TIP.2006.877507.
  • [11] Rajalingam, R. Priya, and R. Bhavani, “Hybrid Multimodal Medical Image Fusion Using a Combination of Transform Techniques for Disease Analysis,” in Procedia Computer Science, 2019, vol. 152, pp. 150-157, https://doi.org/10.1016/j.procs.2019.05.037.
  • [12] Yang, Y. Wu, Y. Wang, and Y. Xiong, “A novel fusion technique for CT and MRI medical image based on NSST,” in Proceedings of the 28th Chinese Control and Decision Conference, CCDC 2016, 2016, pp. 4367-4372, https://doi.org/10.1109/CCDC.2016.7531752.
  • [13] Wang, M. Zheng, H. Wei, G. Qi, and Y. Li, “Multi-modality medical image fusion using convolutional neural network and contrast pyramid,” Sensors (Switzerland), vol. 20, no. 8, pp. 1-17, 2020, https://doi.org/10.3390/s20082169.
  • [14] R. Balakrishnan, “Multimodal Medical Image Fusion based on Deep Learning Neural Network for Clinical Treatment Analysis,” Int. J. ChemTech Res., vol. 11, no. 6, pp. 160-176, 2018, https://doi.org/10.20902/ijctr.2018.110621.
  • [15] Z. Guo, X. Li, H. Huang, N. Guo, and Q. Li, “Medical image segmentation based on the multi-modal convolutional neural network: Study on image fusion schemes,” in Proceedings - International Symposium on Biomedical Imaging, 2018, pp. 903-907, https://doi.org/10.1109/ISBI.2018.8363717.
  • [16] H. Hermessi, O. Mourali, and E. Zagrouba, “Convolutional neural network-based multimodal image fusion via similarity learning in the shearlet domain,” Neural Comput. Appl., vol. 30, no. 7, pp. 2029-2045, 2018, https://doi.org/10.1007/s00521-018-3441-1.
  • [17] Y. Liu, X. Chen, J. Cheng, and H. Peng, “A medical image fusion method based on convolutional neural networks,” in 20th International Conference on Information Fusion, Fusion 2017 - Proceedings, 2017, pp. 18-24, https://doi.org/10.23919/ICIF.2017.8009769.
  • [18] Z. Guo, X. Li, H. Huang, N. Guo, and Q. Li, “Deep Learning-Based Image Segmentation on Multimodal Medical Imaging,” IEEE Trans. Radiat. Plasma Med. Sci., vol. 3, no. 2, pp. 162-169, 2019, https://doi.org/10.1109/trpms.2018.2890359.
  • [19] “Multi-modal Medical Image Fusion using Convolutional Neural Networks,” Bachelor thesis, 2018.
  • [20] Huang et al., “A New Pulse Coupled Neural Network (PCNN) for Brain Medical Image Fusion Empowered by Shuffled Frog Leaping Algorithm,” Front. Neurosci., vol. 13, no. March, pp. 1-10, 2019, https://doi.org/10.3389/fnins.2019.00210.
  • [21] Kesavan et al., “Fuzzy Logic based Multi-modal Medical Image Fusion of MRI-PET Images,” Int. J. Sci. Technol. Eng., vol. 2, no. 10, pp. 268-271, 2016.
  • [22] J. Sebastian and G. R. G. King, “Fusion of Multimodality Medical Images - A Review,” in 2021 Smart Technologies, Communication and Robotics (STCR), 2021, pp. 1-6, https://doi.org/10.1109/STCR51658.2021.9588882.
  • [23] J. Sebastian and G. R. Gnana King, “Analysis of MRI and SPECT Image Fusion in the Wavelet Domain for Brain Tumor Detection,” in Advances in Distributed Computing and Machine Learning, R. R. Rout, S. K. Ghosh, P. K. Jana, A. K. Tripathy, J. P. Sahoo, and K.-C. Li, Eds., Lecture Notes in Networks and Systems, vol. 427, Springer, Singapore, 2022, https://doi.org/10.1007/978-981-19-1018-0_53.
  • [24] M. Misiti and J. Poggi, Wavelet Toolbox™ 4 User's Guide. The MathWorks, 2009.
  • [25] P. Jagalingam and A. Vittal, “A Review of Quality Metrics for Fused Image,” Aquat. Procedia, vol. 4, no. ICWRCOE, pp. 133-142, 2015, https://doi.org/10.1016/j.aqpro.2015.02.019.
Notes
Record compiled with funds from the Ministry of Education and Science (MEiN), agreement no. SONP/SP/546092/2022, under the "Społeczna odpowiedzialność nauki" (Social Responsibility of Science) programme - module: Popularisation of science and promotion of sport (2022-2023).
Document type
YADDA identifier
bwmeta1.element.baztech-c9865162-c336-4f22-bb13-07216f6ee252