Article title

The automatic focus segmentation of multi-focus image fusion

Publication languages
EN
Abstracts
EN
Multi-focus image fusion is a method of increasing image quality and reducing image redundancy. It is used in many fields, such as medical diagnostics, surveillance, and remote sensing. Although various algorithms are available today, a common problem remains: most methods cannot adequately handle ghost effects and unpredicted noise. Computational intelligence has developed rapidly over recent decades, and multi-focus image fusion has advanced with it. The proposed method is multi-focus image fusion based on an automatic encoder-decoder algorithm built on the DeepLabV3+ architecture. During training, the network learns from a multi-focus dataset with ground-truth focus masks, yielding a trained network model. This model is then applied to the test sets to predict a focus map; the testing stage is thus a semantic segmentation of focus. Lastly, the fusion stage combines the focus map with the multi-focus source images to compose the fused image. The results show that the fused images contain neither ghost effects nor unpredicted tiny artifacts. The proposed method is assessed in two respects: the accuracy of the predicted focus map, and objective quality measures of the fused image such as mutual information, SSIM, and PSNR. The focus-map predictions achieve high precision and recall, and the SSIM, PSNR, and mutual information scores are likewise high. The proposed method also performs more stably than competing methods. Finally, the ResNet50-based model for multi-focus image fusion handles the ghost effect problem well.
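The abstract outlines a concrete pipeline: build a DeepLabV3+ encoder-decoder on a ResNet50 backbone, train it on multi-focus images with ground-truth focus masks, predict a per-pixel focus map, and select pixels accordingly. Since the cited tooling is MATLAB's semantic segmentation stack [27, 29], the following is a minimal MATLAB sketch of such a pipeline; the folder names, input size, label encoding, and training hyperparameters are illustrative assumptions, not the authors' settings.

    % Minimal sketch of the described pipeline; assumed details are marked.
    % Requires the Deep Learning and Computer Vision Toolboxes.

    % 1. Encoder-decoder network: DeepLabV3+ with a ResNet50 backbone [27, 28].
    imageSize  = [256 256 3];                 % assumed input size
    classNames = ["focused" "defocused"];     % binary focus map
    lgraph = deeplabv3plusLayers(imageSize, numel(classNames), "resnet50");

    % 2. Training on multi-focus images with ground-truth focus masks.
    %    "multifocus/" and "masks/" are hypothetical folder names; masks
    %    are assumed to be stored as 255 = focused, 0 = defocused.
    imds = imageDatastore("multifocus/");
    pxds = pixelLabelDatastore("masks/", classNames, [255 0]);
    opts = trainingOptions("sgdm", ...
        "InitialLearnRate", 1e-3, ...         % assumed hyperparameters
        "MaxEpochs", 30, "MiniBatchSize", 8);
    net = trainNetwork(combine(imds, pxds), lgraph, opts);

    % 3. Testing: semantic segmentation of focus on a source pair.
    A = imresize(imread("nearFocus.jpg"), imageSize(1:2));  % hypothetical pair
    B = imresize(imread("farFocus.jpg"),  imageSize(1:2));
    C = semanticseg(A, net);                  % per-pixel focus labels
    mask = uint8(C == "focused");             % predicted binary focus map

    % 4. Fusion: take focused pixels from A, the rest from B.
    fused = mask .* A + (1 - mask) .* B;

    % 5. Objective assessment against an all-in-focus reference image.
    ref = imresize(imread("groundTruth.jpg"), imageSize(1:2));
    fprintf("SSIM %.3f, PSNR %.2f dB\n", ssim(fused, ref), psnr(fused, ref));
    % Mutual information has no one-line built-in; it is typically computed
    % from the joint histogram of the fused and source images.

Note the design choice in step 4: because each output pixel is copied from exactly one source image according to the decision map, no blended double edges can arise, which is how a map-then-fuse scheme avoids the ghost effect the abstract discusses.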
Pages
art. no. e140352
Physical description
Bibliography: 34 items, figures, tables
Contributors
author
  • Universiti Malaysia Pahang, Faculty of Electrical and Electronics Engineering, 26300 Kuantan, Malaysia
  • Politeknik Negeri Padang, Electrical Engineering Department, 25162, Padang, Indonesia
author
  • Universiti Malaysia Pahang, Faculty of Electrical and Electronics Engineering, 26300 Kuantan, Malaysia
Bibliography
  • [1] X. Zhang, “Multi-focus Image Fusion: A Benchmark”, arXiv preprint arXiv:2005.01116, 2020.
  • [2] E. Kot, Z. Krawczyk, K. Siwek, L. Królicki, and P. Czwarnowski, “Deep learning based framework for tumour detection and semantic segmentation”, Bull. Pol. Acad. Sci. Tech. Sci., vol. 69, no. 3, p. e136750, 2021.
  • [3] B. Huang, F. Yang, M. Yin, X. Mo, and C. Zhong, “A Review of Multimodal Medical Image Fusion Techniques”, Comput. Math. Methods Med., vol. 2020, pp. 1–16, 2020.
  • [4] K. Kulpa, M. Malanowski, J. Misiurewicz, and P. Samczynski, “Radar and optical images fusion using stripmap SAR data with multilook processing”, Int. J. Electron. Telecommun., vol. 57, no. 1, pp. 37–42, 2011.
  • [5] Y. Liu, X. Chen, H. Peng, and Z. Wang, “Multi-focus image fusion with a deep convolutional neural network”, Inf. Fusion, vol. 36, pp. 191–207, 2017.
  • [6] Ismail and K.H. Bin Ghazali, “The Multifocus Images Fusion Based on a Generative Gradient Map”, Lect. Notes Electr. Eng., vol. 632, pp. 401–413, 2020.
  • [7] Ismail and K. Hawari, “Multi Focus Image Fusion with Region-Center Based Kernel”, Int. J. Adv. Sci. Eng. Inf. Technol., vol. 11, no. 1, pp. 57–63, 2021.
  • [8] K. Hawari, “The Normalized Random Map of Gradient for Generating Multifocus Image Fusion”, Int. J. Recent Technol. Eng., vol. 9, no. 1, pp. 1063–1069, 2020.
  • [9] X. Zhang, X. Li, and Y. Feng, “A new multifocus image fusion based on spectrum comparison”, Signal Process., vol. 123, pp. 127–142, 2016.
  • [10] V.N. Gangapure, S. Banerjee, and A.S. Chowdhury, “Steerable local frequency based multispectral multifocus image fusion”, Inf. Fusion, vol. 23, pp. 99–115, 2015.
  • [11] A. Jameel, A. Ghafoor, and M.M. Riaz, “Wavelet and guided filter based multifocus fusion for noisy images”, Optik (Stuttg.), vol. 126, no. 23, pp. 3920–3923, 2015.
  • [12] S. Bhat and D. Koundal, “Multi-focus image fusion techniques: a survey”, Artif. Intell. Rev., vol. 54, no. 6, pp. 1–53, 2021.
  • [13] H. Tang, B. Xiao, W. Li, and G. Wang, “Pixel convolutional neural network for multi-focus image fusion”, Inf. Sci., vol. 433–434, pp. 125–141, 2018.
  • [14] M. Amin-Naji, A. Aghagolzadeh, and M. Ezoji, “Ensemble of CNN for multi-focus image fusion”, Inf. Fusion, vol. 51, pp. 201–214, 2019.
  • [15] Z. Krawczyk and J. Starzyński, “Segmentation of bone structures with the use of deep learning techniques”, Bull. Pol. Acad. Sci. Tech. Sci., vol. 69, no. 3, p. e136751, 2021.
  • [16] H. Li, Y. Chai, and Z. Li, “A new fusion scheme for multifocus images based on focused pixels detection”, Mach. Vis. Appl., vol. 24, no. 6, pp. 1167–1181, 2013.
  • [17] H. Li, X. Liu, Z. Yu, and Y. Zhang, “Performance improvement scheme of multifocus image fusion derived by difference images”, Signal Process., vol. 128, pp. 474–493, 2016.
  • [18] H. Li, H. Qiu, Z. Yu, and B. Li, “Multifocus image fusion via fixed window technique of multiscale images and non-local means filtering”, Signal Process., vol. 138, pp. 71–85, 2017.
  • [19] Y. Yang, W. Zheng, and S. Huang, “Effective multifocus image fusion based on HVS and BP neural network”, Sci. World J., vol. 2014, p. 281073, 2014.
  • [20] R. Hong, C. Wang, Y. Ge, M. Wang, X. Wu, and R. Zhang, “Salience preserving multi-focus image fusion”, Proc. 2007 IEEE Int. Conf. Multimed. Expo, ICME 2007, 2007, pp. 1663–1666.
  • [21] X. Bai, M. Liu, Z. Chen, P. Wang, and Y. Zhang, “Multi-Focus Image Fusion Through Gradient-Based Decision Map Construction and Mathematical Morphology”, IEEE Access, vol. 4, no. 1, pp. 4749–4760, 2016.
  • [22] Y. Lecun, Y. Bengio, and G. Hinton, “Deep learning”, Nature, vol. 521, no. 7553, pp. 436–444, 2015.
  • [23] R. Kapela, “Texture recognition system based on the Deep Neural Network”, Bull. Pol. Acad. Sci. Tech. Sci., vol. 68, no. 6, pp. 1503–1511, 2020.
  • [24] A. Świetlicka and K. Kolanowski, “Robot sensor failure detection system based on convolutional neural networks for calculation of Euler angles”, Bull. Pol. Acad. Sci. Tech. Sci., vol. 68, no. 6, pp. 1525–1533, 2020.
  • [25] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition”, Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., 2016, pp. 770–778.
  • [26] M. Nejati, S. Samavi, and S. Shirani, “Multi-focus image fusion using dictionary-based sparse representation”, Inf. Fusion, vol. 25, pp. 72–84, 2015.
  • [27] “Create DeepLab v3+ convolutional neural network for semantic image segmentation – MATLAB deeplabv3plusLayers”. [Online]. Available: https://www.mathworks.com/help/vision/ref/deeplabv3pluslayers.html. [Accessed: 15-Jul-2021].
  • [28] L.C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam, “Encoder-decoder with atrous separable convolution for semantic image segmentation”, Lect. Notes Comput. Sci., vol. 11211, pp. 833–851, 2018.
  • [29] “Semantic Segmentation – MATLAB & Simulink”. [Online]. Available: https://www.mathworks.com/solutions/image-video-processing/semantic-segmentation.html. [Accessed: 16-Jul-2021].
  • [30] S. Paul, I.S. Sevcenco, and P. Agathoklis, “Multi-exposure and multi-focus image fusion in gradient domain”, J. Circuits, Syst. Comput., vol. 25, no. 10, pp. 1–18, 2016.
  • [31] Y. Zhang, X. Bai, and T. Wang, “Boundary finding based multi-focus image fusion through multi-scale morphological focus-measure”, Inf. Fusion, vol. 35, pp. 81–101, 2017.
  • [32] S. Paul, I.S. Sevcenco, and P. Agathoklis, “Multi-exposure and multi-focus image fusion in gradient domain”, J. Circuits, Syst. Comput., vol. 25, no. 10, 2016.
  • [33] Z. Wang, A.C. Bovik, H.R. Sheikh, and E.P. Simoncelli, “Image quality assessment: From error visibility to structural similarity”, IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, 2004.
  • [34] C.S. Xydeas and V. Petrović, “Objective image fusion performance measure”, Electron. Lett., vol. 36, no. 4, pp. 308–309, Feb. 2000.
YADDA identifier
bwmeta1.element.baztech-f8ffc9e1-7065-4723-8ce8-dd9590730a3f