Article title

Single target tracking algorithm for lightweight Siamese networks based on global attention

Content
Identifiers
Title variants
Languages of publication
EN
Abstracts
EN
Object tracking based on Siamese networks has achieved great success in recent years, but increasingly advanced trackers have also become cumbersome, which severely limits their deployment on resource-constrained devices. To address this problem, we design a network, built on the lightweight SiamFC tracking model, that matches or exceeds the tracking performance of other lightweight models. In addition, because the SiamFC tracker handles similar semantic information, deformation, illumination change, and scale change poorly, we propose a global attention module together with multi-scale training and testing strategies. To verify the effectiveness of the proposed algorithm, we conducted comparative experiments on the ILSVRC, OTB100, and VOT2018 datasets. The experimental results show that the proposed method significantly improves the performance of the baseline algorithm.
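To make the abstract's idea concrete, the following is a minimal, illustrative PyTorch-style sketch of the kind of architecture it describes: a SiamFC-style cross-correlation tracker with a global (channel plus spatial) attention block applied to the shared backbone features. All module names, layer sizes, and the attention design below are assumptions for illustration only and are not taken from the paper's actual implementation.

# Illustrative sketch only; not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalAttention(nn.Module):
    """Hypothetical global attention: channel weights from global average
    pooling, followed by a single-channel spatial weighting."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.channel_fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
        self.spatial_conv = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention computed from globally pooled features.
        w_ch = self.channel_fc(x.mean(dim=(2, 3))).view(b, c, 1, 1)
        x = x * w_ch
        # Spatial attention over the whole feature map.
        w_sp = torch.sigmoid(self.spatial_conv(x))
        return x * w_sp

class LightweightSiamese(nn.Module):
    """SiamFC-style skeleton: shared backbone, attention, cross-correlation."""
    def __init__(self):
        super().__init__()
        # Small AlexNet-like backbone (placeholder, not the paper's exact one).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 11, stride=2), nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(64, 128, 5), nn.BatchNorm2d(128), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(128, 256, 3),
        )
        self.attention = GlobalAttention(256)

    def forward(self, template, search):
        z = self.attention(self.backbone(template))  # e.g. 127x127 exemplar
        x = self.attention(self.backbone(search))    # e.g. 255x255 search region
        # Cross-correlation: use each exemplar embedding as a conv kernel.
        b, c, h, w = z.shape
        x = x.view(1, b * c, x.size(2), x.size(3))
        score = F.conv2d(x, z.view(b * c, 1, h, w), groups=b * c)
        score = score.view(b, c, score.size(2), score.size(3)).sum(dim=1, keepdim=True)
        return score  # response map; the peak gives the predicted target location

if __name__ == "__main__":
    net = LightweightSiamese()
    response = net(torch.randn(1, 3, 127, 127), torch.randn(1, 3, 255, 255))
    print(response.shape)

The multi-scale testing strategy mentioned in the abstract would, in a SiamFC-style tracker, typically be realized by running the search branch on several rescaled crops of the search region and keeping the scale with the strongest response; that step is omitted from the sketch above.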
Year
Pages
art. no. e139961
Physical description
Bibliography: 40 items, figures, tables
Authors
author
  • College of Mathematics and Computer Science, Zhejiang Normal University, Jinhua, Zhejiang, 321000, China
author
  • College of Mathematics and Computer Science, Zhejiang Normal University, Jinhua, Zhejiang, 321000, China
author
  • College of Mathematics and Computer Science, Zhejiang Normal University, Jinhua, Zhejiang, 321000, China
Bibliography
  • [1] E. Kot, Z. Krawczyk, K. Siwek, L. Krolicki, and P. Czwarnowski, “Deep learning based framework for tumour detection and semantic segmentation,” Bull. Pol. Acad. Sci. Tech. Sci., p. e136750, 2021, doi: 10.24425/bpasts.2021.136750.
  • [2] A.M. Osowska-Kurczab, T. Markiewicz, M. Dziekiewicz and M. Lorent, “Combining texture analysis and deep learning in renal tumour classification task,” 2020 IEEE 21st International Conference on Computational Problems of Electrical Engineering (CPEE), 2020, pp. 1–4, doi: 10.1109/CPEE50798.2020.9238757.
  • [3] Z. Krawczyk and J. Starzyński, “Segmentation of bone structures with the use of deep learning techniques,” Bull. Pol. Acad. Sci. Tech. Sci., vol. 69, p. e136751, 2021, doi: 10.24425/bpasts.2021.136751.
  • [4] L. Bertinetto et al., “Fully-Convolutional Siamese Networks for Object Tracking,” Computer Vision – ECCV 2016 Workshops. ECCV 2016. Lecture Notes in Computer Science, vol. 9914, doi: 10.1007/978-3-319-48881-3_56.
  • [5] P. Li, B. Chen, W. Ouyang, D. Wang, X. Yang, and H. Lu, “GradNet: Gradient-Guided Network for Visual Object Tracking,” 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 6161–6170, doi: 10.1109/ICCV.2019.00626.
  • [6] Z. Zhu et al., “Distractor-aware Siamese Networks for Visual Object Tracking,” Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol. 11213, pp. 103–119, 2018, doi: 10.1007/978-3-030-01240-3_7.
  • [7] B. Li, W. Wu, Q. Wang, F. Zhang, J. Xing, and J. Yan, “SiamRPN++: Evolution of Siamese Visual Tracking With Very Deep Networks,” 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 4277–4286, doi: 10.1109/CVPR.2019.00441.
  • [8] S. Han, H. Mao, and W.J. Dally, “Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding,” 4th International Conference on Learning Representations, ICLR 2016, 2016.
  • [9] Y. Liu, X. Dong, W. Wang, and J. Shen, “Teacher-students knowledge distillation for siamese trackers,” Available: https://arxiv.org/abs/1907.10586.
  • [10] J. Valmadre, L. Bertinetto, J. Henriques, A. Vedaldi, and P.H.S. Torr, “End-to-End Representation Learning for Correlation Filter Based Tracking,” 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 5000–5008, doi: 10.1109/CVPR.2017.531.
  • [11] T. Wang, “DenseNet Siamese network target tracking with global context feature module,” J. Electron. Inf. Technol., vol. 43, no. 1, pp. 179–186, 2021, doi: 10.11999/JEIT190788.
  • [12] X. Yang et al., “An improved target tracking algorithm based on spatio-temporal context under occlusions,” Multidimension. Syst. Signal Process., vol. 31, no. 2, pp. 329–344, 2020, doi: 10.1007/s11045-019-00664-5.
  • [13] Y.F. Zhang et al., “A high robust real-time single-target ship tracking method based on siamese network,” Available: http://en.cnki.com.cn/Article_en/CJFDTotal-JCKX201923022.html.
  • [14] Z. Sha and H. Yuqing, “UAV designated target tracking based on Siamese area candidate network,” IEEE Comput. Appl. Power, vol. 41, no. 2, pp. 523–529, 2021.
  • [15] W. Xiang, Z.G. Yuxuan, Y. Qiqi, and L. Xiaomao, “Scale-adaptive sea surface target tracking algorithm based on deep learning,” J. Underwater Unmanned Syst., vol. 28, no. 6, pp. 618–625, 2020.
  • [16] Z. Xingchen et al., “DSiamMFT: An RGB-T fusion tracking method via dynamic Siamese networks using multi-layer feature fusion,” Signal Process. Image Commun., vol. 84, p. 115756, 2020, doi: 10.1016/j.image.2019.115756.
  • [17] G. Bhat et al., “Learning Discriminative Model Prediction for Tracking,” 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 6182–6191.
  • [18] Y. Fan and X. Song, “Siamese Progressive Attention-Guided Fusion Network for Object Tracking,” J. Comput.-Aided Des. Comput. Graphics, vol. 33, no. 2, pp. 199–206, doi: 10.3724/SP.J.1089.2021.18392.
  • [19] L. Enhan, Z. Rui, Z. Shuo, and W. Ru, “An infrared pedestrian target tracking method based on video prediction,” J. Harbin Inst. Technol., vol. 52, no. 10, pp. 192–200, 2020.
  • [20] Ch. Lei, W. Yue, and T. Chunna, “A visual target tracking algorithm with residual attention mechanism,” J. Xidian Univ., vol. 47, no. 6, pp. 148–157+163, 2020.
  • [21] K. Jie, S. Yang, and S. Junge, “Siamese network target tracking based on difficult sample mining,” Comput. Appl. Res., vol. 38, no. 4, pp. 1216–1219+1223, 2021.
  • [22] C. Xi, H. Yifeng, Y. Yunfeng, Q. Donglian, and S. Jianxin, “Target intelligent tracking and segmentation fusion algorithm and its application in substation video surveillance,” 2020 Electr. Eng. Conf., vol. 40, no. 23, pp. 7578–7587, 2020.
  • [23] W. Guishan, L. Shubin, Z. Jianghua, and Y. Wenyuan, “Siamese network target tracking based on regional loss function,” Int. J. Syst., vol. 15, no. 4, pp. 722–731, 2020.
  • [24] C. Zhiwang, Z. Zhongxin, S. Juan, L. Hongfu, and P. Yong, “Siamese network tracking algorithm based on target attention feature selection,” Acta Optica Sinica, vol. 40, no. 9, pp. 110–126, 2020.
  • [25] P. Lei, F. Xinxi, H. Zhiqiang, Y. Wangsheng, and M. Sugang, “Siamese network visual tracking algorithm based on cascaded attention mechanism,” J. Aeronautics and Astronautics, vol. 46, no. 12, pp. 2302–2310, 2020.
  • [26] Y. Zhichao and Z. Ruihong, “Improved Siamese network tracking algorithm combined with deep contour features,” J.Xidian Univ., vol. 47, no. 3, pp. 40–49, 2020.
  • [27] B. Li, J. Yan, W. Wu, Z. Zhu and X. Hu, “High Performance Visual Tracking with Siamese Region Proposal Network,” 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp. 8971–8980, doi: 10.1109/CVPR.2018.00935.
  • [28] D. Held, S. Thrun, and S. Savarese, “Learning to track at 100 fps with deep regression networks,” Computer Vision – ECCV 2016. ECCV 2016. Lecture Notes in Computer Science, vol. 9905, pp. 749–765, doi: 10.1007/978-3-319-46448-0_45.
  • [29] Z. Hongwei, L. Xiaoxia, Z. Bin, and M. Qi, “Target tracking error correction method based on multi-scale suggestion frame,” Comput. Eng. Appl., vol. 56, no. 19, pp. 132–138, 2020.
  • [30] W. Junling and W. Shuohao, “Deep learning target tracking algorithm based on Siamese network,” Comput. Eng. Des., vol. 40, no. 10, pp. 3014–3019, 2019.
  • [31] Q. Zhuling, Z. Yufei, Z. Peng, and W. Min, “Visual tracking algorithm based on Siamese neural network online discriminant features,” Acta Optica Sinica, vol. 39, no. 9, pp. 253–261, 2019.
  • [32] B. Yan et al., “LightTrack: Finding Lightweight Neural Networks for Object Tracking via One-Shot Architecture Search,” 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 15180–15189.
  • [33] Z. Tengfei, Z. Shuren, and P. Jian, “Adaptive selection tracking system based on dual Siamese network,” Comput. Eng., vol. 46, no. 6, pp. 103–107, 2020.
  • [34] Y. Kang, S. Huihui, and Z. Kaihua, “Real-time visual tracking based on dual attention Siamese network,” Comput. Appl., vol. 39, no. 6, pp. 1652–1656, 2019.
  • [35] C. Xu and M. Zhaohui, “Overview of target video tracking algorithms based on deep learning,” Comput. Syst. Appl., vol. 28, no. 1, pp. 1–9, 2019.
  • [36] S. Lulu, Z. Suofi, and W. Xiaofu, “Target tracking based on Tiny Darknet fully convolutional Siamese network,” J. Nanjing Univ. Posts and Telecommun., vol. 38, no. 4, pp. 89–95, 2018.
  • [37] J. Hu, L. Shen, S. Albanie, G. Sun, and E. Wu, “Squeeze-and-Excitation Networks,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 42, no. 8, pp. 2011–2023, Aug. 2020, doi: 10.1109/TPAMI.2019.2913372.
  • [38] S. Pu et al., “Deep attentive tracking via reciprocative learning,” Available: https://arxiv.org/abs/1810.03851.
  • [39] Z. Dawei et al, “Learning Fine-Grained Similarity Matching Networks for Visual Tracking,” 2020 Int. Multimedia Retrieval Conf., 2020, pp. 296–300, doi: 10.1145/3372278.3390729.
  • [40] H. Anfeng et al., “A twofold Siamese network for real-time object tracking,” 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 4834–4843.
Notes
Record prepared with funds of the Ministry of Education and Science (MEiN), agreement no. SONP/SP/546092/2022, under the "Społeczna odpowiedzialność nauki" (Social Responsibility of Science) programme, module: popularisation of science and promotion of sport (2022-2023).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-4add0f03-77c2-421a-8194-62bb0f512922