Article title

Shallow Layer Convolutional Features with Correlation Filters for UAV Object Tracking

Content / Full text
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
In this paper, shallow-layer convolutional features are proposed for unmanned aerial vehicle (UAV) tracking. These shallow features are generated by a pre-trained convolutional neural network (CNN) and are used to represent the target object. To estimate the target location, an adaptive correlation filter based on the Fourier transform is used: the filter is combined with the shallow convolutional features by pixel-wise multiplication in the Fourier domain, and the inverse Fourier transform is then applied to obtain a response map whose maximum indicates the target location. Because the appearance of the target changes during tracking, a model update scheme is also proposed to address this issue. The proposed method is evaluated on the UAV123 10 fps benchmark dataset. Comprehensive experimental results show that it performs favorably against state-of-the-art tracking algorithms.
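As a rough illustration of the pipeline described in the abstract, the sketch below shows Fourier-domain correlation of a shallow feature map with a filter, a peak search on the response map, and a simple running-average model update. It is a minimal sketch, not the authors' implementation: it assumes a single-channel feature map, a filter already expressed in the Fourier domain, and a hypothetical learning rate lr for the update.

    import numpy as np

    def correlation_response(feature_map, filter_freq):
        # FFT the shallow CNN feature map, multiply pixel-wise with the
        # conjugated filter in the Fourier domain, and take the inverse FFT
        # to obtain the spatial response map.
        feat_freq = np.fft.fft2(feature_map)
        return np.real(np.fft.ifft2(feat_freq * np.conj(filter_freq)))

    def locate_target(response):
        # The estimated target location is the peak of the response map.
        return np.unravel_index(np.argmax(response), response.shape)

    def update_filter(filter_freq, new_filter_freq, lr=0.01):
        # Running average so the filter follows appearance changes;
        # the exact update rule used by the authors may differ.
        return (1.0 - lr) * filter_freq + lr * new_filter_freq

In a tracking loop, filter_freq would be learned from the first frame, each new frame's features would be scored with correlation_response and locate_target, and the filter would then be refreshed with update_filter.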
Year
Volume
Pages
49--57
Physical description
Bibliography: 27 items, figures, tables
Authors
  • School of Electrical Engineering, Telkom University, Bandung, Indonesia, 40257
  • School of Electrical Engineering, Telkom University, Bandung, Indonesia, 40257
  • School of Electrical Engineering, Telkom University, Bandung, Indonesia, 40257
  • School of Electrical Engineering, Telkom University, Bandung, Indonesia, 40257
Bibliography
  • [1] X. Qin and T. Wang, „Visual-based tracking and control algorithm design for quadcopter UAV", in Proc. of the Chinese Control and Decision Conf. CCDC 2019, Nanchang, China, 2019 (DOI: 10.1109/CCDC.2019.8832545).
  • [2] Z. Zheng and H. Yao, „A method for UAV tracking target in obstacle environment", in Proc. Chinese Autom. Congr. CAC 2019, Nanchang, China, 2019, pp. 4639-4644 (DOI: 10.1109/CCDC.2019.8832545).
  • [3] M. Mueller, N. Smith, and B. Ghanem, „A benchmark and simulator for UAV tracking", in Computer Vision - ECCV 2016. 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part I, B. Leibe, J. Matas, N. Sebe, and M. Welling, Eds. LNCS, vol. 9905, pp. 445-461. Cham: Springer, 2016 (DOI: 10.1007/978-3-319-46448-0_27).
  • [4] K. Nummiaro, E. Koller-Meier, and L. Van Gool, „An adaptive color-based particle filter", Image and Vis. Comput., vol. 21, no. 1, pp. 99-110, 2003 (DOI: 10.1016/S0262-8856(02)00129-4).
  • [5] S.-K. Weng, C.-M. Kuo, and S.-K. Tu, „Video object tracking using adaptive Kalman filter", J. of Visual Commun. and Image Represent., vol. 17, no. 6, pp. 1190-1208, 2006 (DOI: 10.1016/j.jvcir.2006.03.004).
  • [6] S. A. Wibowo, H. Lee, E. K. Kim, and S. Kim, „Tracking failures detection and correction for face tracking by detection approach based on fuzzy coding histogram and point representation", in Proc. of the Int. Conf. on Fuzzy Theory and Its Appl. iFUZZY 2015, Yilan, Taiwan, 2015, pp. 34-39 (DOI: 10.1109/iFUZZY.2015.7391890).
  • [7] H. Grabner, M. Grabner, and H. Bischof, „Real-time tracking via on-line boosting", in Proc. of the 17th British Machine Vision Conf. BMVC 06, Edinburgh, Scotland, 2006 (DOI: 10.5244/C.20.6).
  • [8] H. Grabner, C. Leistner, and H. Bischof, „Semi-supervised on-line boosting for robust tracking", in Computer Vision - ECCV 2008. 10th European Conference on Computer Vision, Marseille, France, October 12-18, 2008, Proceedings, Part I, D. Forsyth, P. Torr, and A. Zisserman, Eds. LNCS, vol. 5302, pp. 234-247. Berlin, Heidelberg: Springer, 2008 (DOI: 10.1007/978-3-540-88682-2_19).
  • [9] B. Babenko, M.-H. Yang, and S. Belongie, „Robust object tracking with online multiple instance learning", IEEE Trans. on Pattern Anal. and Mach. Intell., vol. 33, no. 8, pp. 1619-1632, 2011 (DOI: 10.1109/TPAMI.2010.226).
  • [10] K. Zhang and H. Song, „Real-time visual tracking via online weighted multiple instance learning", Pattern Recogn., vol. 46, no. 1, pp. 397-411, 2013 (DOI: 10.1016/j.patcog.2012.07.013).
  • [11] Z. Kalal, K. Mikolajczyk, and J. Matas, „Tracking-learning-detection", IEEE Trans. on Pattern Anal. and Mach. Intell., vol. 34, no. 7, pp. 1409-1422, 2012 (DOI: 10.1109/TPAMI.2011.239).
  • [12] X. Mei, H. Ling, Y. Wu, E. P. Blasch, and L. Bai, „Efficient minimum error bounded particle resampling L1 tracker with occlusion detection", IEEE Trans. on Image Process., vol. 22, no. 7, pp. 2661-2675, 2013 (DOI: 10.1109/TIP.2013.2255301).
  • [13] C. Bao, Y. Wu, H. Ling, and H. Ji, „Real time robust L1 tracker using accelerated proximal gradient approach", in Proc. of IEEE Conf. on Comp. Vision and Pattern Recogn., Providence, RI, USA, 2012, pp. 1830-1837 (DOI: 10.1109/CVPR.2012.6247881).
  • [14] W. Zhong, H. Lu, and M.-H. Yang, „Robust object tracking via sparse collaborative appearance model", IEEE Trans. on Image Process., vol. 23, no. 5, pp. 2356-2368, 2014 (DOI: 10.1109/TIP.2014.2313227).
  • [15] S. A. Wibowo, H. Lee, E. K. Kim, and S. Kim, „Fast generative approach based on sparse representation for visual tracking", in Proc. of the Joint 8th Int. Conf. on Soft Comput. and Intell. Syst. SCIS and 17th Int. Symp. on Adv. Intell. Sys. ISIS, Sapporo, Japan, 2016, pp. 778-783 (DOI: 10.1109/SCIS-ISIS.2016.0169).
  • [16] D. S. Bolme, J. R. Beveridge, B. A. Draper, and Y. M. Lui, „Visual object tracking using adaptive correlation filters", in Proc. of the IEEE Conf. on Comp. Vision and Pattern Recogn. CVPR'10, San Francisco, CA, USA, 2010, pp. 1401-1409 (DOI: 10.1109/CVPR.2010.5539960).
  • [17] J. F. Henriques, R. Caseiro, P. Martins, and J. Batista, „High-speed tracking with kernelized correlation filters", IEEE Trans. on Pattern Anal. and Mach. Intell., vol. 37, no. 3, pp. 583-596, 2015 (DOI: 10.1109/TPAMI.2014.2345390).
  • [18] S. A. Wibowo, H. Lee, E. K. Kim, and S. Kim, „Multi-scale color features based on correlation filter for visual tracking", in Proc. of the 1st Int. Conf. on Sig. and Sys. ICSigSys, Bali, Indonesia, 2017, pp. 272-277 (DOI: 10.1109/ICSIGSYS.2017.7967055).
  • [19] M. Danelljan, G. Hager, F. S. Khan, and M. Felsberg, „Accurate scale estimation for robust visual tracking", in Proc. of the British Mach. Vis. Conf. BMVC'14, Nottingham, UK, 2014 (DOI: 10.5244/C.28.65).
  • [20] K. Zhang, L. Zhang, Q. Liu, D. Zhang, and M.-H. Yang, „Fast visual tracking via dense spatio-temporal context learning", in Computer Vision - ECCV 2014. 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V, D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars, Eds. LNCS, vol. 8693, pp. 127-141. Cham: Springer, 2014 (DOI: 10.1007/978-3-319-10602-1_9).
  • [21] L. Bertinetto, J. Valmadre, S. Golodetz, O. Miksik, and P. H. S. Torr, „Staple: complementary learners for real-time tracking", in Proc. of the IEEE Conf. on Comp. Vis. and Pattern Recogn. CVPR'16, Las Vegas, NV, USA, 2016, pp. 1401-1409 (DOI: 10.1109/CVPR.2016.156).
  • [22] S. A. Wibowo, H. Lee, E. K. Kim, and S. Kim, „Visual tracking based on complementary learners with distractor handling", Mathem. Probl. in Engin., vol. 2017, article ID 5295601, 2017 (DOI: 10.1155/2017/5295601).
  • [23] D. A. Ross, J. Lim, R. S. Lin, and M.-H. Yang, „Incremental learning for robust visual tracking", Int. J. of Comp. Vision, vol. 77, no. 1, pp. 125-141, 2008 (DOI: 10.1007/s11263-007-0075-7).
  • [24] K. Simonyan and A. Zisserman, „Very deep convolutional networks for large-scale image recognition", in Proc. of the 3rd Int. Conf. on Learn. Represent., San Diego, CA, USA, 2015 [Online]. Available: https://arxiv.org/pdf/1409.1556.pdf
  • [25] X. Jia, H. Lu, and M.-H. Yang, „Visual tracking via adaptive structural local sparse appearance model", in Proc. of the Int. Conf. on Comp. Vis. and Pattern Recogn., Providence, RI, USA, 2012 (DOI: 10.1109/CVPR.2012.6247880).
  • [26] S. Hare, A. Saffari, and P. H. S. Torr, „Struck: Structured output tracking with kernels", in Proc. of the Int. Conf. on Comp. Vision, Barcelona, Spain, 2011 (DOI: 10.1109/ICCV.2011.6126251).
  • [27] J. F. Henriques, R. Caseiro, P. Martins, and J. Batista, „Exploiting the circulant structure of tracking-by-detection with kernels", in Computer Vision - ECCV 2012. 12th European Conference on Computer Vision, Florence, Italy, October 7-13, 2012. Proceedings, Part IV, A. Fitzgibbon et al., Eds. LNCS, vol. 7575, pp. 702-715. Berlin, Heidelberg: Springer, 2012 (DOI: 10.1007/978-3-642-33765-9_50).
Remarks
Record developed with funds of the Polish Ministry of Science and Higher Education (MNiSW), agreement no. SONP/SP/546092/2022, under the "Społeczna odpowiedzialność nauki" (Social Responsibility of Science) programme, module: Popularisation of science and promotion of sport (2024).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-7c7e9737-c896-4d87-a5ca-c3363b683dff