Identifiers
Title variants
MSCNN: Multi-sensor image fusion based on a dual-channel CNN technique in the transform domain
Publication languages
Abstracts
This paper describes an image fusion approach based on convolutional neural networks (CNNs) and the discrete wavelet transform (DWT). In the proposed method, each input image is first decomposed into approximation and detail coefficients using the DWT. Second, a CNN operating on the detail coefficients is used to maximize the fusion weights. Third, the fused detail images are produced using the maximum weights and max pooling. Fourth, the final approximation coefficients are obtained by average pooling of the approximation coefficients. Finally, the inverse DWT combines the fused detail and final approximation images into the final fused image. Experiments are carried out on four different fusion datasets; the results are evaluated with several quality metrics and compared against both recent and conventional fusion techniques. The results substantiate that the proposed technique outperforms the existing fusion methods. Its reasonable computational time and simple yet efficient implementation also make it suitable for real-time applications.
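The pipeline described in the abstract can be sketched in a few lines of Python. This is a minimal, single-level Haar illustration, not the authors' implementation: a max-absolute activity rule stands in for the trained CNN weight map (the network architecture is not specified in this record), and all function names are illustrative.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: returns (LL, (LH, HL, HH)). Assumes even dimensions."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return LL, (LH, HL, HH)

def haar_idwt2(LL, bands):
    """Inverse of haar_dwt2: perfect reconstruction of the original image."""
    LH, HL, HH = bands
    a = np.zeros((LL.shape[0], LL.shape[1] * 2))
    d = np.zeros_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    out = np.zeros((a.shape[0] * 2, a.shape[1]))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

def fuse(img_a, img_b):
    """Fuse two registered, same-size images: detail bands by max-absolute
    selection (a stand-in for the CNN-derived maximum weights), approximation
    bands by averaging, then inverse DWT."""
    LLa, Da = haar_dwt2(img_a)
    LLb, Db = haar_dwt2(img_b)
    fused_details = tuple(np.where(np.abs(da) >= np.abs(db), da, db)
                          for da, db in zip(Da, Db))
    fused_LL = (LLa + LLb) / 2.0   # average pooling of approximations
    return haar_idwt2(fused_LL, fused_details)
```

As a sanity check, fusing an image with itself returns the image unchanged, since both the max-selection and the averaging rules are identities in that case.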
Publisher
Journal
Year
Volume
Pages
165–182
Physical description
Bibliography: 33 items, figures, charts.
Authors
author
- The Charutar Vidya Mandal University, Department of Applied Science & Humanities, G H Patel College of Engineering & Technology, Vallabh Vidhyanagar-388120, India
author
- The Charutar Vidya Mandal University, Department of Applied Science & Humanities, G H Patel College of Engineering & Technology, Vallabh Vidhyanagar-388120, India
Bibliography
- [1] F.-P. An, X.-M. Ma, and L. Bai. Image fusion algorithm based on unsupervised deep learning-optimized sparse representation. Biomedical Signal Processing and Control, 71:103140, 2022.
- [2] P. I. Basheer, K. P. Prasad, A. D. Gupta, B. Pant, V. P. Vijayan, and D. Kapila. Optimal fusion technique for multi-scale remote sensing images based on DWT and CNN. In 2022 8th International Conference on Smart Structures and Systems (ICSSS), pages 1–6, 2022.
- [3] D. P. Bavirisetti, G. Xiao, and G. Liu. Multi-sensor image fusion based on fourth order partial differential equations. 2017 20th International Conference on Information Fusion (Fusion), pages 1–9, 2017.
- [4] S. Budhiraja, S. Agrawal, and B. S. Sohi. Performance analysis of multiscale transforms for saliency-based infrared and visible image fusion. In M. Saraswat, S. Roy, C. Chowdhury, and A. H. Gandomi, editors, Proceedings of International Conference on Data Science and Applications 2021, pages 801–809, Singapore, 2022. Springer Singapore.
- [5] Y. Chen, K. Shi, Y. Ge, and Y. Zhou. Spatiotemporal remote sensing image fusion using multiscale two-stream convolutional neural networks. IEEE Transactions on Geoscience and Remote Sensing, 60:1–12, 2022.
- [6] R. Gonzalez, R. Woods, and S. Eddins. Digital image processing using MATLAB. McGraw Hill Education, second edition, 2017.
- [7] X. Huo, Y. Deng, and K. Shao. Infrared and visible image fusion with significant target enhancement. Entropy, 24(11), 2022.
- [8] C.-G. Im, D.-M. Son, H.-J. Kwon, and S.-H. Lee. Tone image classification and weighted learning for visible and NIR image fusion. Entropy, 24(10), 2022.
- [9] H. Li, X.-J. Wu, and J. Kittler. Infrared and visible image fusion using a deep learning framework. In 24th International Conference on Pattern Recognition (ICPR), 2018, pages 2705–2710, 2018.
- [10] Y. Liu, Z. Wu, X. Han, Q. Sun, J. Zhao, and J. Liu. Infrared and visible image fusion based on visual saliency map and weighted least square optimization. Infrared Physics and Technology, pages 8–17, 2017.
- [11] Y. Liu, L. Wang, H. Li, and X. Chen. Multi-focus image fusion with deep residual learning and focus property detection. Information Fusion, 86-87:1–16, 2022.
- [12] Y. Luo, Q. Zeng, Y. Li, W. Qiu, and Y. Zhang. Feature Matching technology of Visible and Infrared Fusion Image based on VIFB. Journal of Physics: Conference Series, 2284(1):012011, jun 2022.
- [13] N. Ma, Z. Wu, Y. M. Cheung, G. Yuchen, Y. Gao, J. Li, and B. Jiang. A survey of human action recognition and posture prediction. Tsinghua Science and Technology, 27(6):973–1001, 2022.
- [14] S. Nirmalraj and G. Nagarajan. Fusion of visible and infrared image via compressive sensing using convolutional sparse representation. ICT Express, 7(3):350–354, 2021.
- [15] V. Rajinikanth, S. C. Satapathy, N. Dey, and R. Vijayarajan. DWT-PCA image fusion technique to improve segmentation accuracy in brain tumor analysis. In J. Anguera, S. C. Satapathy, V. Bhateja, and K. Sunitha, editors, Microelectronics, Electromagnetics and Telecommunications, pages 453–462, Singapore, 2018. Springer Singapore.
- [16] I. Shopovska, L. Jovanov, and W. Philips. Deep visible and thermal image fusion for enhanced pedestrian visibility. Sensors, 19(17), 2019.
- [17] B. K. Shreyamsha Kumar. Image fusion based on pixel significance using a cross bilateral filter. Signal, Image and Video Processing, 9:1193–1204, 2015.
- [18] S. Singh, H. Singh, N. Mittal, H. Singh, A. G. Hussien, and F. Sroubek. A feature level image fusion for night-vision context enhancement using arithmetic optimization algorithm based image segmentation. Expert Systems with Applications, 209:118272, 2022.
- [19] S. Singh, H. Singh, A. Gehlot, J. Kaur, and Gagandeep. IR and visible image fusion using DWT and bilateral filter. Microsystem Technologies, 29:457–467, 2023.
- [20] L. Tang, J. Yuan, H. Zhang, X. Jiang, and J. Ma. PIAFusion: A progressive infrared and visible image fusion network based on illumination aware. Information Fusion, 83-84:79–92, 2022.
- [21] W. Tang, F. He, Y. Liu, and Y. Duan. MATR: Multimodal medical image fusion via multiscale adaptive transformer. IEEE Transactions on Image Processing, 31:5134–5149, 2022.
- [22] A. Toet. The TNO multiband image data collection. Data in Brief, 15:249–251, 2017.
- [23] G. Trivedi and R. Sanghvi. Medical image fusion using CNN with automated pooling. Indian Journal of Science and Technology, 15(42):2267–2274, nov 2022.
- [24] G. Trivedi and R. Sanghvi. Hybrid model for infrared and visible image fusion. Annals of the Faculty of Engineering Hunedoara, 21(3):167–173, aug 2023.
- [25] G. Trivedi and R. Sanghvi. Optimizing image fusion using modified principal component analysis algorithm and adaptive weighting scheme. International Journal of Advanced Networking and Applications, 15(01):5769–5774, 2023.
- [26] G. Trivedi and R. Sanghvi. Novel approach to multi-modal image fusion using modified convolutional layers. Journal of Innovative Image Processing, 5(3):229, sep 2023.
- [27] G. Trivedi and R. Sanghvi. Fusesharp: A multi-image focus fusion method using discrete wavelet transform and unsharp masking. J. Appl. Math. and Informatics, 41(5):1115–1128, sep 2023.
- [28] G. Trivedi and R. Sanghvi. MOSAICFUSION: Merging modalities with partial differential equation and discrete cosine transformation. J. Appl. & Pure Math., 5(5–6):389–406, nov 2023.
- [29] G. Trivedi and R. Sanghvi. Automated multimodal fusion with PDE preprocessing and learnable convolutional pools. ADBU-Journal of Engineering Technology, 13(1):0130104066, Jan 2024.
- [30] G. Trivedi, J. S. Vishant Shah, and R. Sanghvi. On solution of noninstantaneous impulsive Hilfer fractional integro-differential evolution system. Mathematica Applicanda, 51(1):3–20, jun 2023.
- [31] T. Xu. Multisensor Concealed Weapon Detection Using the Image Fusion Approach. PhD thesis, University of Windsor, 2016. URL https://scholar.uwindsor.ca/etd/5773.
- [32] X. Zhang, P. Ye, and G. Xiao. VIFB: A visible and infrared image fusion benchmark. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 468–478, 2020.
- [33] M. Zhou, J. Huang, X. Fu, F. Zhao, and D. Hong. Effective pansharpening by multiscale invertible neural network and heterogeneous task distilling. IEEE Transactions on Geoscience and Remote Sensing, 60:1–14, 2022.
Notes
PL
Record developed with funds from MNiSW, agreement no. SONP/SP/546092/2022, under the programme "Social Responsibility of Science" - module: Popularization of science and promotion of sport (2024).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-cac9b26d-d0d3-4b0d-975a-c0010fbe2529