Article title

Deep networks for image super-resolution using hierarchical features

Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
To better extract feature maps from low-resolution (LR) images and recover high-frequency information in high-resolution (HR) images in image super-resolution (SR), we propose in this paper a new SR algorithm based on a deep convolutional neural network (CNN). The network consists of a feature extraction part and a reconstruction part. The extraction network extracts feature maps from the LR image and uses a sub-pixel convolutional neural network as the up-sampling operator. Skip connections, densely connected layers, and feature-map fusion are used to extract information from hierarchical feature maps at the end of the network, which effectively reduces the dimension of the feature maps. In the reconstruction network, we add a 3×3 convolution layer on top of the original sub-pixel convolution layer, which gives the reconstruction network better nonlinear mapping ability. Experiments show that the algorithm yields a significant improvement in PSNR, SSIM, and visual quality compared with several state-of-the-art deep-learning-based algorithms.
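The reconstruction step described in the abstract, sub-pixel (pixel-shuffle) up-sampling followed by an added 3×3 convolution, can be illustrated with a minimal PyTorch sketch. This is not the authors' code: the channel counts, scale factor, and module names below are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn


class SubPixelReconstruction(nn.Module):
    """Minimal sketch of a reconstruction head: a sub-pixel convolution
    for up-sampling, followed by an extra 3x3 convolution layer.
    All hyperparameters here are assumptions for illustration only."""

    def __init__(self, in_channels=64, out_channels=3, scale=2):
        super().__init__()
        # Sub-pixel convolution: expand channels by scale**2, then rearrange
        # the extra channels into spatial resolution with PixelShuffle.
        self.expand = nn.Conv2d(in_channels, out_channels * scale ** 2,
                                kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)
        # Additional 3x3 convolution placed after the sub-pixel layer,
        # corresponding to the extra mapping layer mentioned in the abstract.
        self.refine = nn.Conv2d(out_channels, out_channels,
                                kernel_size=3, padding=1)

    def forward(self, x):
        return self.refine(self.shuffle(self.expand(x)))


if __name__ == "__main__":
    # Hierarchical features from a (hypothetical) extraction network.
    lr_features = torch.randn(1, 64, 32, 32)
    hr_image = SubPixelReconstruction()(lr_features)
    print(hr_image.shape)  # torch.Size([1, 3, 64, 64])
```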
Year
Pages
art. no. e139616
Physical description
Bibliography: 31 items, figures, tables
Authors
author
  • College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, 210016 Nanjing, Jiangsu, China
author
  • College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, 210016 Nanjing, Jiangsu, China
author
  • College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, 210016 Nanjing, Jiangsu, China
Bibliography
  • [1] T. Karras, T. Aila, and S. Laine, “Progressive growing of gans for improved quality, stability, and variation”, arXiv preprint arXiv:1710.10196, 2017.
  • [2] W. Shi, J. Caballero, and C. Ledig, “Cardiac image super-resolution with global correspondence using multi-atlas patch-match”, in International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2013, pp. 9–16.
  • [3] X. Yang, D. Liu, and D. Zhou, “Super-resolution reconstruction of face images based on pre-amplification non-negative restricted neighborhood embedding”, Bull. Pol. Acad. Sci. Tech. Sci., vol. 66, no. 6, pp. 899–905, 2018.
  • [4] C. Dong, C.C. Loy, and K. He, “Learning a deep convolutional network for image super-resolution”, in European conference on computer vision (ECCV), 2014, pp. 184–199.
  • [5] J. Kim, J.K. Lee, and K.M. Lee, “Accurate image super-resolution using very deep convolutional networks”, in Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), 2016, pp. 1646–1654.
  • [6] M. Haris, G. Shakhnarovich, and N. Ukita, “Deep back-projection networks for single image super-resolution”, arXiv preprint arXiv:1904.05677, 2019.
  • [7] J. Kim, J.K. Lee, and K.M. Lee, “Deeply-recursive convolutional network for image super-resolution”, in Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), 2016, pp. 1637–1645.
  • [8] Y. Tai, J. Yang, and X. Liu, “Memnet: A persistent memory network for image restoration”, in Proceedings of the IEEE international conference on computer vision (ICCV), 2017, pp. 4539–4547.
  • [9] Y. Zhang, Y. Tian, and Y. Kong, “Residual dense network for image super-resolution”, in Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), 2018, pp. 2472–2481.
  • [10] J. Yamanaka, S. Kuwashima, and T. Kurita, “Fast and accurate image super resolution by deep CNN with skip connection and network in network”, in International Conference on Neural Information Processing (ICONIP), 2017, pp. 217–225.
  • [11] L. Chen, Q. Kou, and D. Cheng, “Content-guided deep residual network for single image super-resolution”, Optik, vol. 202, p. 163678, 2020.
  • [12] B. Lim, S. Son, and H. Kim, “Enhanced deep residual networks for single image super-resolution”, in Proceedings of the IEEE conference on computer vision and pattern recognition workshops, 2017, pp. 136–144.
  • [13] X. Yang, X. Li, and Z. Li, “Image super-resolution based on deep neural network of multiple attention mechanism”, J. Vis. Commun. Image Represent., vol. 75, p. 103019, 2021.
  • [14] L. Chen, L. Guo, and D. Cheng, “A lightweight network with bidirectional constraints for single image super-resolution”, Optik, vol. 239, p. 166818, 2021.
  • [15] X. Yang, Y. Guo, and Z. Li, “Image super-resolution network based on a multi-branch attention mechanism”, Signal Image Video Process., vol. 15, pp. 1–9, 2021.
  • [16] X.J. Mao, C. Shen, and Y.B. Yang, “Image restoration using convolutional auto-encoders with symmetric skip connections”, arXiv preprint arXiv:1606.08921, 2016.
  • [17] C. Ledig, L. Theis, and F. Huszar, “Photo-realistic single image super-resolution using a generative adversarial network”, in Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), 2017, pp. 4681–4690.
  • [18] G. Huang, Z. Liu, L. Van Der Maaten, and K.Q. Weinberger, “Densely connected convolutional networks”, in Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), 2017, pp. 4700–4708.
  • [19] W. Shi, J. Caballero, and L. Theis, “Is the deconvolution layer the same as a convolutional layer?”, arXiv preprint arXiv:1609.07009, 2016.
  • [20] W. Shi, J. Caballero, and F. Huszár, “Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network”, in Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), 2016, pp. 1874–1883.
  • [21] D. Martin, C. Fowlkes, and D. Tal, “A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics”, in Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001, 2001, pp. 416–423.
  • [22] M. Bevilacqua, A. Roumy, and C. Guillemot, “Low-complexity single-image super-resolution based on nonnegative neighbour embedding”, in Proceedings of the 23rd British Machine Vision Conference (BMVC), 2012, pp. 135.1–135.10.
  • [23] R. Zeyde, M. Elad, and M. Protter, “On single image scale-up using sparse-representations”, in International Conference on Curves and Surfaces, 2010, pp. 711–730.
  • [24] J.B. Huang, A. Singh, and N. Ahuja, “Single image super-resolution from transformed self-exemplars”, in Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), 2015, pp. 5197–5206.
  • [25] R. Timofte, E. Agustsson, and L.V. Gool, “NTIRE 2017 challenge on single image super-resolution: Methods and results”, in Proceedings of the IEEE conference on computer vision and pattern recognition workshops, 2017, pp. 114–125.
  • [26] R. Timofte, V.D. Smet, and L.V. Gool, “A+: Adjusted anchored neighborhood regression for fast super-resolution”, in Asian conference on computer vision (ACCV), 2014, pp. 111–126.
  • [27] J.B. Huang, A. Singh, and N. Ahuja, “Single image super-resolution from transformed self-exemplars”, in Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), 2015, pp. 5197–5206.
  • [28] W.S. Lai, J.B. Huang, and N. Ahuja, “Deep laplacian pyramid networks for fast and accurate super-resolution”, in Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), 2017, pp. 624–632.
  • [29] C. Wang, Z. Li, and J. Shi, “Lightweight image super-resolution with adaptive weighted learning network”, arXiv preprint arXiv:1904.02358, 2019.
  • [30] Z. Hui, X. Wang, and X. Gao, “Two-stage convolutional network for image super-resolution”, in 2018 24th International Conference on Pattern Recognition (ICPR), 2018, pp. 2670–2675.
  • [31] Y. Matsui et al., “Sketch-based manga retrieval using manga109 dataset”, Multimed. Tools Appl., vol. 76, no. 20, pp. 21811–21838, 2017.
Document type
YADDA identifier
bwmeta1.element.baztech-d1e259cc-9633-4f5f-8120-967f93b48600