Article title

Infrared and visible image fusion with deep wavelet-dense network

Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
We propose a high-quality infrared and visible image fusion method based on a deep wavelet-dense network (WT-DenseNet). The WT-DenseNet consists of three layers: a hybrid feature extraction layer, a fusion layer, and an image reconstruction layer. The hybrid feature extraction layer combines a wavelet network and a dense network. The wavelet network decomposes the feature maps of the visible and infrared images into low-frequency and high-frequency components, while the dense network extracts salient features. The fusion layer integrates the low-frequency components with the salient features, and the image reconstruction layer outputs the fused image. Experimental results demonstrate that the proposed method achieves high-quality infrared and visible image fusion and outperforms six recently published fusion methods in terms of contrast and detail.
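The abstract outlines the full pipeline, so a compact sketch may help make the data flow concrete. The PyTorch code below is a minimal, illustrative rendering of the three-layer design under stated assumptions: a fixed single-level Haar wavelet transform stands in for the wavelet network, a small densely connected block (after Huang et al. [18]) stands in for the dense network, and the fusion rules (averaging the low-frequency bands, taking the element-wise maximum of the salient features) as well as all channel counts and class names (HaarWavelet, DenseBlock, WTDenseFusion) are assumptions of this sketch, not the authors' published configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HaarWavelet(nn.Module):
    # Single-level 2D Haar DWT: splits each channel into one low-frequency
    # (LL) band and three high-frequency (LH, HL, HH) bands at half resolution.
    def __init__(self):
        super().__init__()
        ll = torch.tensor([[0.5, 0.5], [0.5, 0.5]])
        lh = torch.tensor([[-0.5, -0.5], [0.5, 0.5]])
        hl = torch.tensor([[-0.5, 0.5], [-0.5, 0.5]])
        hh = torch.tensor([[0.5, -0.5], [-0.5, 0.5]])
        self.register_buffer("filters", torch.stack([ll, lh, hl, hh]).unsqueeze(1))

    def forward(self, x):                          # x: (B, C, H, W), H and W even
        b, c, h, w = x.shape
        y = F.conv2d(x.reshape(b * c, 1, h, w), self.filters, stride=2)
        y = y.reshape(b, c, 4, h // 2, w // 2)
        return y[:, :, 0], y[:, :, 1:].flatten(1, 2)   # low: (B,C,...), high: (B,3C,...)

class DenseBlock(nn.Module):
    # Small densely connected block [18]; each layer sees the concatenation
    # of the input and all earlier feature maps. Depth/growth are assumptions.
    def __init__(self, in_ch, growth=16, n_layers=3):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(in_ch + i * growth, growth, 3, padding=1) for i in range(n_layers))
        self.out_ch = in_ch + n_layers * growth

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(F.relu(conv(torch.cat(feats, dim=1))))
        return torch.cat(feats, dim=1)

class WTDenseFusion(nn.Module):
    # Hybrid feature extraction (wavelet + dense branches), a simple fusion
    # layer, and a convolutional reconstruction layer.
    def __init__(self):
        super().__init__()
        self.dwt = HaarWavelet()
        self.dense = DenseBlock(in_ch=1)
        self.reconstruct = nn.Sequential(
            nn.Conv2d(1 + self.dense.out_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, ir, vis):                    # grayscale inputs: (B, 1, H, W)
        low_ir, _ = self.dwt(ir)                   # high-frequency bands unused here,
        low_vis, _ = self.dwt(vis)                 # matching the abstract's fusion rule
        low = 0.5 * (low_ir + low_vis)             # assumed rule: average the low bands
        low = F.interpolate(low, size=ir.shape[-2:], mode="bilinear", align_corners=False)
        sal = torch.maximum(self.dense(ir), self.dense(vis))  # assumed rule: max of salient features
        return self.reconstruct(torch.cat([low, sal], dim=1))

ir, vis = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
print(WTDenseFusion()(ir, vis).shape)              # torch.Size([1, 1, 64, 64])

Run on two registered grayscale inputs of even height and width, the network returns a fused image at the input resolution; averaging and element-wise maximum are common baseline fusion choices here and could be replaced by a learned fusion layer without changing the surrounding structure.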
Journal
Year
Pages
49–64
Physical description
Bibliography: 27 items, figures, tables
Authors
author
  • Guangdong Provincial Key Laboratory of Cyber-Physical System, School of Automation, Guangdong University of Technology, Guangzhou 510006, China
  • College of Optical Sciences, University of Arizona, Tucson, AZ 85721, USA
  • School of Computer, Guangdong University of Technology, Guangzhou 510006, China
author
  • Guangdong Provincial Key Laboratory of Cyber-Physical System, School of Automation, Guangdong University of Technology, Guangzhou 510006, China
  • College of Optical Sciences, University of Arizona, Tucson, AZ 85721, USA
  • School of Computer, Guangdong University of Technology, Guangzhou 510006, China
author
  • Guangdong Provincial Key Laboratory of Cyber-Physical System, School of Automation, Guangdong University of Technology, Guangzhou 510006, China
  • College of Optical Sciences, University of Arizona, Tucson, AZ 85721, USA
  • School of Computer, Guangdong University of Technology, Guangzhou 510006, China
author
  • Guangdong Provincial Key Laboratory of Cyber-Physical System, School of Automation, Guangdong University of Technology, Guangzhou 510006, China
  • College of Optical Sciences, University of Arizona, Tucson, AZ 85721, USA
  • School of Computer, Guangdong University of Technology, Guangzhou 510006, China
Bibliography
  • [1] FENG Y.F., LU H.Q., BAI J.B., CAO L., YIN H., Fully convolutional network-based infrared and visible image fusion, Multimedia Tools and Applications 79, 2020: 15001–15014, DOI: 10.1007/s11042-019-08579-w.
  • [2] LIU Y., DONG L., JI Y., XU W., Infrared and visible image fusion through details preservation, Sensors 19(20), 2019: 4556, DOI: 10.3390/s19204556.
  • [3] MA J., MA Y., LI C., Infrared and visible image fusion methods and applications: A survey, Information Fusion 45, 2019: 153–178, DOI: 10.1016/j.inffus.2018.02.004.
  • [4] MA J., ZHANG H., SHAO Z., LIANG P., XU H., GANMcC: A generative adversarial network with multiclassification constraints for infrared and visible image fusion, IEEE Transactions on Instrumentation and Measurement 70, 2020: 5005014, DOI: 10.1109/TIM.2020.3038013.
  • [5] LI H., QI X.B., XIE W.Y., Fast infrared and visible image fusion with structural decomposition, Knowledge-Based Systems 204, 2020: 106182, DOI: 10.1016/j.knosys.2020.106182.
  • [6] ZHANG H., MA J., SDNet: A versatile squeeze-and-decomposition network for real-time image fusion, International Journal of Computer Vision 129, 2021: 2761–2785, DOI: 10.1007/s11263-021-01501-8.
  • [7] ZHANG Q., LIU Y., BLUM R.S., HAN J.G., TAO D.C., Sparse representation based multi-sensor image fusion for multi-focus and multi-modality images: A review, Information Fusion 40, 2018: 57–75, DOI: 10.1016/j.inffus.2017.05.006.
  • [8] ZHANG H., XU H., TIAN X., JIANG J.J., MA J.Y., Image fusion meets deep learning: A survey and perspective, Information Fusion 76, 2021: 323–336, DOI: 10.1016/j.inffus.2021.06.008.
  • [9] ZHU P., DING L., MA X.Q., HUANG Z.H., Fusion of infrared polarization and intensity images based on improved toggle operator, Optics & Laser Technology 98, 2018: 139–151, DOI: 10.1016/j.optlastec.2017.07.054.
  • [10] MA J.Y., XU H., JIANG J.J., MEI X.G., ZHANG X.P., DDcGAN: A dual-discriminator conditional generative adversarial network for multi-resolution image fusion, IEEE Transactions on Image Processing 29, 2020: 4980–4995, DOI: 10.1109/TIP.2020.2977573.
  • [11] LIU Y., CHEN X., CHENG J., PENG H., WANG Z.F., Infrared and visible image fusion with convolutional neural networks, International Journal of Wavelets, Multiresolution and Information Processing 16(3), 2018: 1850018, DOI: 10.1142/S0219691318500182.
  • [12] LI H., WU X.J., DenseFuse: A fusion approach to infrared and visible images, IEEE Transactions on Image Processing 28(5), 2019: 2614–2623, DOI: 10.1109/TIP.2018.2887342.
  • [13] GARCIA F., MIRBACH B., OTTERSTEN B., GRANDIDIER F., CUESTA A., Pixel weighted average strategy for depth sensor data fusion, 2010 IEEE International Conference on Image Processing, 2010: 2805–2808, DOI: 10.1109/ICIP.2010.5651112.
  • [14] ZITOVA B., FLUSSER J., Image registration methods: A survey, Image and Vision Computing 21(11), 2003: 977–1000, DOI: 10.1016/S0262-8856(03)00137-9.
  • [15] CUN X.D., PUN C.M., GAO H., Applying stochastic second-order entropy images to multi-modal image registration, Signal Processing: Image Communication 65, 2018: 201–209, DOI: 10.1016/j.image.2018.03.021.
  • [16] LIN C.C., SHEU M.H., CHIANG H.K., LIAW C., WU Z.C., The efficient VLSI design of BI-CUBIC convolution interpolation for digital image processing, 2008 IEEE International Symposium on Circuits and Systems (ISCAS), 2008: 480, DOI: 10.1109/ISCAS.2008.4541459.
  • [17] BUADES A., COLL B., MOREL J.M., Non-local means denoising, Image Processing On Line 1, 2011: 208–212, DOI: 10.5201/ipol.2011.bcm_nlm.
  • [18] HUANG G., LIU Z., VAN DER MAATEN L., WEINBERGER K.Q., Densely connected convolutional networks, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017: 2261–2269, DOI: 10.1109/CVPR.2017.243.
  • [19] MALLAT S.G., A theory for multiresolution signal decomposition: the wavelet representation, IEEE Transactions on Pattern Analysis and Machine Intelligence 11(7), 1989: 674–693, DOI: 10.1109/34.192463.
  • [20] LIN T.Y., MAIRE M., BELONGIE S., HAYS J., PERONA P., RAMANAN D., DOLLAR P., ZITNICK C.L., Microsoft COCO: common objects in context, [In] Computer Vision – ECCV 2014, D. Fleet, T. Pajdla, B. Schiele, T. Tuytelaars [Eds.], ECCV 2014, Lecture Notes in Computer Science, Vol. 8693, Springer, Cham, 2014: 740–755, DOI: 10.1007/978-3-319-10602-1_48.
  • [21] TOET A., TNO Image Fusion Dataset, 2014.
  • [22] DU Q., XU H., MA Y., HUANG J., FAN F., Fusing infrared and visible images of different resolutions via total variation model, Sensors 18(11), 2018: 3827, DOI: 10.3390/s18113827.
  • [23] ZHANG Y., ZHANG L.J., BAI X.Z., ZHANG L., Infrared and visual image fusion through infrared feature extraction and visual information preservation, Infrared Physics & Technology 83, 2017: 227–237, DOI: 10.1016/j.infrared.2017.05.007.
  • [24] MA J.L., ZHOU Z.Q., WANG B., ZONG H., Infrared and visible image fusion based on visual saliency map and weighted least square optimization, Infrared Physics & Technology 82, 2017: 8–17, DOI: 10.1016/j.infrared.2017.02.005.
  • [25] XU H., MA J.Y., JIANG J.J., GUO X.J., LING H.B., U2Fusion: A unified unsupervised image fusion network, IEEE Transactions on Pattern Analysis and Machine Intelligence 44(1), 2022: 502–518, DOI: 10.1109/TPAMI.2020.3012548.
  • [26] HAGHIGHAT M., RAZIAN M.A., Fast-FMI: Non-reference image fusion metric, 2014 IEEE 8th International Conference on Application of Information and Communication Technologies (AICT), Astana, Kazakhstan, Oct. 2014, DOI: 10.1109/ICAICT.2014.7036000.
  • [27] DING M., YAO Y., LI W., CAO Y., Visual tracking using locality-constrained linear coding and saliency map for visible light and infrared image sequences, Signal Processing: Image Communication 68, 2018: 13–25, DOI: 10.1016/j.image.2018.06.019.
Notes
Record developed using MEiN funds, agreement no. SONP/SP/546092/2022, under the "Społeczna odpowiedzialność nauki" (Social Responsibility of Science) programme, module: Popularisation of science and promotion of sport (2022-2023).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-e68b8681-10d9-4bdd-bcfa-086133d6e1e8