Article title
Identifiers
Title variants
Publication languages
Abstracts
Traditional light field all-in-focus image fusion algorithms are based on the digital refocusing technique: multi-focus images rendered from a single light field image are used to compute the all-in-focus image, and the spatial information of the light field is used for sharpness evaluation. Analyzing the 4D light field from a different perspective, this paper presents an all-in-focus image fusion algorithm based on angular information. In the proposed method, the 4D light field data are fused directly, and a macro-pixel energy difference function based on angular information is established for sharpness evaluation. The fused 4D data are then refined under the guidance of the dimension-increased central sub-aperture image. Finally, the all-in-focus image is obtained by integrating the refined 4D light field data. Experimental results show that images fused by the proposed method have higher visual quality, and quantitative evaluation confirms this performance: with the light field angular information, both the image feature-based index and the human-perception-inspired index of the fused image are improved.
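The core idea described in the abstract can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the exact macro-pixel energy difference function and the guided refinement by the central sub-aperture image are not reproduced here. Instead, the sketch uses a plain angular-variance measure as a stand-in for the macro-pixel energy difference (an in-focus scene point projects nearly identical values into all angular samples of its macro-pixel, so low angular variance indicates sharpness), selects the sharper focal slice per pixel directly in the 4D domain, and integrates over the angular axes. All function names (`macro_pixel_energy_difference`, `fuse_all_in_focus`) are hypothetical.

```python
import numpy as np

def macro_pixel_energy_difference(lf):
    """Stand-in sharpness measure: for a 4D light field of shape
    (U, V, X, Y), return the per-pixel sum of squared deviations of the
    angular samples from their mean (low value = angularly coherent,
    i.e. in focus)."""
    angular_mean = lf.mean(axis=(0, 1))           # (X, Y)
    return ((lf - angular_mean) ** 2).sum(axis=(0, 1))

def fuse_all_in_focus(lf_slices):
    """Fuse a list of refocused 4D light fields (each (U, V, X, Y)) by
    picking, per spatial pixel, the slice with the smallest angular
    energy difference, then integrate the fused 4D data over the
    angular dimensions to obtain the 2D all-in-focus image."""
    sharpness = np.stack([macro_pixel_energy_difference(lf) for lf in lf_slices])
    best = np.argmin(sharpness, axis=0)           # (X, Y) slice-index map
    stack = np.stack(lf_slices)                   # (N, U, V, X, Y)
    # Select the winning slice for every (u, v, x, y) sample at once.
    fused_4d = np.take_along_axis(stack, best[None, None, None], axis=0)[0]
    return fused_4d.mean(axis=(0, 1))             # integrate angular axes

# Toy demonstration with synthetic data: one angularly coherent slice
# (simulating in-focus content) and one angularly noisy slice.
rng = np.random.default_rng(0)
lf_sharp = np.ones((3, 3, 4, 4))
lf_blur = 2.0 + rng.normal(0.0, 0.5, size=(3, 3, 4, 4))
image = fuse_all_in_focus([lf_sharp, lf_blur])
print(image.shape)
```

The paper additionally refines the fused 4D data with a guided-filter-style step driven by the central sub-aperture image before the final integration; that refinement is omitted here for brevity.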
Journal
Year
Volume
Pages
289–304
Physical description
Bibliography: 23 items, figures, tables
Authors
author
- School of Electronic Information Engineering, Taiyuan University of Science and Technology, No. 66 Waliu Road, Taiyuan 030024, China
author
- School of Electronic Information Engineering, Taiyuan University of Science and Technology, No. 66 Waliu Road, Taiyuan 030024, China
author
- School of Engineering Science, Simon Fraser University, 8888 University Drive, Burnaby, BC, V5A 1S6, Canada
author
- School of Electronic Information Engineering, Taiyuan University of Science and Technology, No. 66 Waliu Road, Taiyuan 030024, China
author
- School of Electronic Information Engineering, Taiyuan University of Science and Technology, No. 66 Waliu Road, Taiyuan 030024, China
Bibliography
- [1] DE I., CHANDA B., Multi-focus image fusion using a morphology-based focus measure in a quad-tree structure, Information Fusion 14(2), 2013, pp. 136–146, DOI:10.1016/j.inffus.2012.01.007.
- [2] ZHANG Y., BAI X.Z., WANG T., Boundary finding based multi-focus image fusion through multi-scale morphological focus-measure, Information Fusion 35, 2017, pp. 81–101, DOI:10.1016/j.inffus.2016.09.006.
- [3] WANG D., LIU C., SHEN C., XING Y., WANG Q.H., Holographic capture and projection system of real object based on tunable zoom lens, PhotoniX 1(1), 2020, article 6, DOI:10.1186/s43074-020-0004-3.
- [4] NG R., LEVOY M., BREDIF M., DUVAL G., HOROWITZ M., HANRAHAN P., Light field photography with a hand-held plenoptic camera, Stanford University Computer Science Technical Report CSTR2005-02, 2005, pp. 1–11.
- [5] NG R., Fourier slice photography, ACM Transactions on Graphics 24(3), 2005, pp. 735–744, DOI:10.1145/1073204.1073256.
- [6] XIE Y.X., WU Y.C., WANG Y.M., ZHAO X.Y., WANG A.H., Light field all-in-focus image fusion based on wavelet domain sharpness evaluation, Journal of Beijing University of Aeronautics and Astronautics 45(9), 2019, pp. 1848–1854 (in Chinese), DOI:10.13700/j.bh.1001-5965.2018.0739.
- [7] XIAO B., OU G., TANG H., BI X.L., LI W.S., Multi-focus image fusion by hessian matrix based decomposition, IEEE Transactions on Multimedia 22(2), 2020, pp. 285–297, DOI:10.1109/TMM.2019.2928516.
- [8] LI H., WU X.J., DenseFuse: a fusion approach to infrared and visible images, IEEE Transactions on Image Processing 28(5), 2019, pp. 2614–2623, DOI:10.1109/TIP.2018.2887342.
- [9] ZHANG Y.Q., WU J.X., LI H., Multi-focus image fusion based on similarity characteristics, Signal Processing 92, 2019, pp. 1268–1280.
- [10] LIU Y., WANG L., CHENG J., LI C., CHEN X., Multi-focus image fusion: a survey of the state of the art, Information Fusion 64, 2020, pp. 71–91, DOI:10.1016/j.inffus.2020.06.013.
- [11] SUN J., HAN Q., KOU L., ZHANG L., ZHANG K., JIN Z., Multi-focus image fusion algorithm based on Laplacian pyramids, Journal of the Optical Society of America A 35(3), 2018, pp. 480–490, DOI:10.1364/JOSAA.35.000480.
- [12] ZOU J.B., SUN W., Multi-focus image fusion based on lifting stationary wavelet transform and joint structural group sparse representation, Journal of Computer Applications 38(3), 2018, pp. 859–865, DOI:10.11772/j.issn.1001-9081.2017081970.
- [13] LI J., SONG M.H., PENG Y.X., Infrared and visible image fusion based on robust principal component analysis and compressed sensing, Infrared Physics and Technology 89, 2018, pp. 129–139, DOI:10.1016/j.infrared.2018.01.003.
- [14] LI S.T., KANG X.D., HU J.W., Image fusion with guided filtering, IEEE Transactions on Image Processing 22(7), 2013, pp. 2864–2875, DOI:10.1109/TIP.2013.2244222.
- [15] FARID M.S., MAHMOOD A., AL-MAADEED S.A., Multi-focus image fusion using Content Adaptive Blurring, Information Fusion 45, 2019, pp. 96–112, DOI:10.1016/j.inffus.2018.01.009.
- [16] QIN X.Q., ZHENG J.Y., HU G., WANG J., Multi-focus image fusion based on window empirical mode decomposition, Infrared Physics and Technology 85, 2017, pp. 251–260, DOI:10.1016/j.infrared.2017.07.009.
- [17] NG R., Digital Light Field Photography, PhD Thesis, Stanford University, 2006.
- [18] HE K., SUN J., TANG X., Guided image filtering, IEEE Transactions on Pattern Analysis and Machine Intelligence 35(6), 2013, pp. 1397–1409, DOI:10.1109/TPAMI.2012.213.
- [19] YOON Y., JEON H.G., YOO D., LEE J.Y., KWEON I.S., Light-field image super-resolution using convolutional neural network, IEEE Signal Processing Letters 24(6), 2017, pp. 848–852, DOI:10.1109/LSP.2017.2669333.
- [20] HAGHIGHAT M., RAZIAN M.A., Fast-FMI: non-reference image fusion metric, 2014 IEEE 8th International Conference on Application of Information and Communication Technologies (AICT), 2014, pp. 1–3, DOI:10.1109/ICAICT.2014.7036000.
- [21] ZHAO J., LAGANIERE R., LIU Z., Performance assessment of combinative pixel-level image fusion based on absolute feature measurement, International Journal of Innovative Computing, Information & Control 3(6), 2007, pp. 1433–1447.
- [22] YANG C., ZHANG J.Q., WANG X.R., LIU X., A novel similarity based quality metric for image fusion, Information Fusion 9(2), 2008, pp. 156–160, DOI:10.1016/j.inffus.2006.09.001.
- [23] FEICHTENHOFER C., FASSOLD H., SCHALLAUER P., A perceptual image sharpness metric based on local edge gradient analysis, IEEE Signal Processing Letters 20(4), 2013, pp. 379–382, DOI:10.1109/LSP.2013.2248711.
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-ce533dc1-e441-4ada-bc88-0e609bbe3726