Article title

Riesz-Laplace Wavelet Transform and PCNN Based Image Fusion

Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
Important information perceived by human vision comes from the low-level features of an image, which can be extracted by the Riesz transform. In this study, we propose a Riesz transform based approach to image fusion. The images to be fused are first decomposed using the Riesz transform. The image sequence obtained in the Riesz transform domain is then subjected to a Laplacian wavelet transform based on fractional Laplacian operators and polyharmonic splines. After the Laplacian wavelet transform, the image representations have directional and multi-resolution characteristics. Finally, image fusion is performed by combining the Riesz-Laplace wavelet analysis with the global coupling characteristics of a pulse coupled neural network (PCNN). The proposed approach has been tested in several application scenarios, such as multi-focus imaging, medical imaging, remote sensing panchromatic imaging, and multi-spectral imaging. Compared with conventional methods, it demonstrates superior performance in terms of visual quality, contrast, clarity, and overall efficiency.
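
As a reading aid only, the minimal sketch below (not taken from the paper) illustrates the three-stage flow the abstract describes, for two pre-registered grayscale images. The Riesz transform is computed with the standard FFT-domain multipliers; the Riesz-Laplace wavelet stage is approximated here by an undecimated Laplacian-style band decomposition; and the PCNN fusion rule is replaced by a simple choose-max-activity rule. All function names and parameters (riesz_transform, laplacian_bands, fuse_images, sigma, levels) are illustrative assumptions, not the authors' implementation.

    # Minimal illustrative sketch (NumPy only); see the assumptions stated above.
    import numpy as np

    def riesz_transform(img):
        """First-order Riesz transform of a 2-D image via FFT-domain multipliers."""
        h, w = img.shape
        fy = np.fft.fftfreq(h)[:, None]   # vertical frequencies
        fx = np.fft.fftfreq(w)[None, :]   # horizontal frequencies
        norm = np.sqrt(fx ** 2 + fy ** 2)
        norm[0, 0] = 1.0                  # avoid division by zero at DC
        F = np.fft.fft2(img)
        rx = np.real(np.fft.ifft2(-1j * fx / norm * F))
        ry = np.real(np.fft.ifft2(-1j * fy / norm * F))
        return rx, ry

    def gaussian_blur(img, sigma):
        """Gaussian low-pass via the frequency domain (keeps the sketch dependency-free)."""
        h, w = img.shape
        fy = np.fft.fftfreq(h)[:, None]
        fx = np.fft.fftfreq(w)[None, :]
        g = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
        return np.real(np.fft.ifft2(np.fft.fft2(img) * g))

    def laplacian_bands(img, levels=3):
        """Undecimated band-pass decomposition standing in for the Riesz-Laplace wavelet stage."""
        bands, current = [], img.astype(float)
        for _ in range(levels):
            low = gaussian_blur(current, sigma=2.0)
            bands.append(current - low)   # detail band
            current = low
        bands.append(current)             # residual low-pass; all bands sum back to the input
        return bands

    def fuse_images(img_a, img_b, levels=3):
        """Fuse two registered grayscale images with a choose-max activity rule (PCNN stand-in)."""
        rax, ray = riesz_transform(img_a)
        rbx, rby = riesz_transform(img_b)
        # Smoothed Riesz energy as a per-pixel activity (saliency) measure.
        act_a = gaussian_blur(rax ** 2 + ray ** 2, sigma=2.0)
        act_b = gaussian_blur(rbx ** 2 + rby ** 2, sigma=2.0)
        bands_a = laplacian_bands(img_a, levels)
        bands_b = laplacian_bands(img_b, levels)
        fused_bands = [np.where(act_a >= act_b, ba, bb)
                       for ba, bb in zip(bands_a, bands_b)]
        return sum(fused_bands)           # undecimated bands reconstruct by summation

For example, fuse_images(img1, img2) on two same-size float arrays returns a fused image of the same size; in the method the abstract describes, the choose-max rule would instead be a PCNN whose firing maps, driven by the Riesz-Laplace coefficients, select which source contributes each coefficient.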
Year
Pages
73-84
Physical description
Bibliography: 23 items, figures, tables
Authors
author
  • Hubei Key Laboratory of Intelligent Vision Based Monitoring for Hydroelectric Engineering, China Three Gorges University, YiChang, China
  • College of Computer and Information Technology, China Three Gorges University, YiChang, China
  • College of Computer and Information Technology, China Three Gorges University, YiChang, China
  • College of Economics and Management, China Three Gorges University, YiChang, China
  • College of Computer and Information Technology, China Three Gorges University, YiChang, China
author
  • College of Computer and Information Technology, China Three Gorges University, YiChang, China
  • Hubei Key Laboratory of Intelligent Vision Based Monitoring for Hydroelectric Engineering, China Three Gorges University, YiChang, China
  • College of Computer and Information Technology, China Three Gorges University, YiChang, China
author
  • Institute of Advanced Studies in Humanities and Social Sciences, Beijing Normal University, Zhuhai, China
Bibliography
  • [1] European Space Agency and H. Kramer. Satellite Missions catalogue. https://directory.eoportal.org/web/eoportal/satellite-missions.
  • [2] S. Bhat and D. Koundal. Multi-focus image fusion using neutrosophic based wavelet transform. Applied Soft Computing, 106:107307, 2021. doi:10.1016/j.asoc.2021.107307.
  • [3] T. A. Bui and X. T. Duong. Higher-order Riesz transforms of Hermite operators on new Besov and Triebel-Lizorkin spaces. Constructive Approximation, 53:85-120, 2021. doi:10.1007/s00365-019-09493-y.
  • [4] J. Chen, L. Chen, and M. Shabaz. Image fusion algorithm at pixel level based on edge detection. Journal of Healthcare Engineering, page 5760660, 2021. doi:10.1155/2021/5760660.
  • [5] K. J. He, X. Jin, R. Nie, D. M. Zhou, Q. Wang, and J. Yu. Color image fusion based on simplified PCNN and Laplace pyramid decomposition (in Chinese). Journal of Computer Applications, 2016(S1):133-137, 2016.
  • [6] X. Jin, R. Nie, and D. M. Zhou. Color image fusion researching based on S-PCNN and Laplacian pyramid. In Proc. 2nd Int. Conf. Cloud Computing and Big Data in Asia CloudCom-Asia 2015, volume 9106 of Lecture Notes in Computer Science, pages 179-188, Huangshan, China, 17-19 Jun 2015. doi:10.1007/978-3-319-28430-9_14.
  • [7] K. A. Johnson and J. A. Becker. The whole brain Atlas. https://www.med.harvard.edu/AANLIB/. Harvard Medical School.
  • [8] W. Li, Q. Liu, K. Wang, and K. Cai. Improving medical image fusion method using fuzzy entropy and non-subsampled contourlet transform. International Journal of Imaging Systems and Technology, 31(1):204-214, 2021. doi:10.1002/ima.22476.
  • [9] Y. Liu, L. Wang, J. Cheng, C. Li, and X. Chen. Multi-focus image fusion: A survey of the state of the art. Information Fusion, 64:71-91, 2020. doi:10.1016/j.inffus.2020.06.013.
  • [10] Y. F. Lu and T. Zhang. Image quality assessment method via Riesz-transform based structural similarity. Chinese Journal of Liquid Crystals and Displays, 30(6):992-999, 2015. doi:10.3788/YJYXS20153006.0992.
  • [11] S. Moritoh and N. Takemoto. Expressing Hilbert and Riesz transforms in terms of wavelet transforms. Integral Transforms and Special Functions, 34(5):365-370, 2023. doi:10.1080/10652469.2022.2126465.
  • [12] C. Panigrahy, A. Seal, and N. K. Mahato. MRI and SPECT image fusion using a weighted parameter adaptive dual channel PCNN. IEEE Signal Processing Letters, 27:690-694, 2020. doi:10.1109/LSP.2020.2989054.
  • [13] C. Panigrahy, A. Seal, and N. K. Mahato. Parameter adaptive unit-linking dual-channel PCNN based infrared and visible image fusion. Neurocomputing, 514:21-38, 2022. doi:10.1016/j.neucom.2022.09.157.
  • [14] A. I. Rahmani, M. Almasi, N. Saleh, and M. Katouli. Image fusion of noisy images based on simultaneous empirical wavelet transform. Traitement du Signal, 37(5):703-710, 2020. doi:10.18280/ts.370502.
  • [15] S. Savić. Multifocus Image Fusion. https://dsp.etfbl.net/mif/. Chair of General Electrical Engineering, Faculty of Electrical Engineering, University of Banja Luka, Republic of Srpska.
  • [16] S. Savić and Z. Babić. Multifocus image fusion based on the first level of empirical mode decomposition. In Proc. 19th Int. Conf. Systems, Signals and Image Processing IWSSIP 2012, pages 604-607, Vienna, Austria, 11-13 Apr 2012. IEEE. https://ieeexplore.ieee.org/abstract/document/6208315.
  • [17] S. Singh, N. Mittal, and H. Singh. A feature level image fusion for IR and visible image using mNMRA based segmentation. Neural Computing and Applications, 34(10):8137-8145, 2022. doi:10.1007/s00521-022-06900-7.
  • [18] J. Sun, Q. Han, K. Liang, L. Zhang, and Z. Jin. Multi-focus image fusion algorithm based on Laplacian pyramids. Journal of the Optical Society of America A, 35(3):480-490, 2018. doi:10.1364/JOSAA.35.000480.
  • [19] L. F. Tang, H. Zhang, H. Xu, and J. Y. Ma. Deep learning-based image fusion: A survey. Journal of Image and Graphics, 28(1):3-36, 2023. doi:10.11834/jig.220422.
  • [20] The MathWorks, Inc. Matlab. Natick, MA, USA. [Accessed: 2018]. https://www.mathworks.com.
  • [21] X. Tian, W. Zhang, Y. Chen, Z. Wang, and J. Ma. Hyperfusion: A computational approach for hyperspectral, multispectral, and panchromatic image fusion. IEEE Transactions on Geoscience and Remote Sensing, 60:5518216, 2021. doi:10.1109/TGRS.2021.3128279.
  • [22] Y. Xing, S. Yang, Z. Feng, and L. Jiao. Dual-collaborative fusion model for multispectral and panchromatic image fusion. IEEE Transactions on Geoscience and Remote Sensing, 60:5400215, 2020. doi:10.1109/TGRS.2020.3036625.
  • [23] J. J. Yang. Improved gradient image fusion algorithm based on NSCT and PCNN. Digital Technology and Application, (11):124-127, 2015. doi:10.19695/j.cnki.cn12-1369.2015.11.092.
Notes
Record developed with funds of the Ministry of Education and Science (MEiN), agreement no. SONP/SP/546092/2022, under the programme "Społeczna odpowiedzialność nauki" (Social Responsibility of Science) - module: popularisation of science and promotion of sport (2022-2023).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-56638b67-b5f0-4030-9b1a-caf1f6299385