Article title

Real-time motion tracking using optical flow on multiple GPUs

Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
Motion tracking algorithms are widely used in computer vision research. However, new video standards, especially high-resolution ones, mean that current implementations, even running on modern hardware, no longer meet the needs of real-time processing. To overcome this challenge, several GPU (Graphics Processing Unit) computing approaches have recently been proposed. Although they demonstrate the great potential of the GPU platform, hardly any of them can process high-definition video sequences efficiently. Thus, a need arose for a tool that addresses this problem. In this paper we present software that implements optical flow motion tracking using the Lucas-Kanade algorithm. It is also integrated with the Harris corner detector, so the algorithm may perform sparse tracking, i.e. tracking of the meaningful pixels only. This substantially lowers the computational burden of the method. Moreover, both parts of the algorithm, i.e. corner selection and tracking, are implemented on the GPU and, as a result, the software is extremely fast, allowing real-time motion tracking of videos in Full HD or even 4K format. To deliver the highest performance, it also supports multiple-GPU systems, where it scales up very well.
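The paper's implementation is pyramidal and runs on GPUs; purely as an illustration of the per-feature least-squares step that Lucas-Kanade tracking performs, the following is a minimal single-level NumPy sketch (the function name, window size, and gradient scheme are our own choices, not the authors'):

```python
import numpy as np

def lucas_kanade(prev, curr, points, win=7):
    """Single-level Lucas-Kanade: least-squares displacement of each
    (x, y) point between two grayscale frames, assuming brightness
    constancy within a small window around the point."""
    half = win // 2
    # spatial gradients of the previous frame (central differences)
    Iy, Ix = np.gradient(prev.astype(np.float64))
    # temporal derivative between the two frames
    It = curr.astype(np.float64) - prev.astype(np.float64)
    flows = []
    for x, y in points:
        ys = slice(y - half, y + half + 1)
        xs = slice(x - half, x + half + 1)
        # one row [Ix, Iy] per pixel in the window
        A = np.stack([Ix[ys, xs].ravel(), Iy[ys, xs].ravel()], axis=1)
        b = -It[ys, xs].ravel()
        # least-squares solution of A @ [dx, dy] = b
        d, *_ = np.linalg.lstsq(A, b, rcond=None)
        flows.append(d)
    return np.array(flows)  # one (dx, dy) per input point
```

Feeding it only corner points (e.g. Harris responses) is what makes the tracking sparse: the 2x2 normal equations are well conditioned exactly where the image has gradient in two directions. A real implementation iterates this solve over image pyramids to handle large displacements.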
Year
Pages
139-150
Physical description
Bibliography: 50 items, figures, photographs, charts
Authors
  • University of Mons, 20 Parc Sq., B7000 Mons, Belgium
author
  • Poznań Supercomputing and Networking Center, 10 Noskowskiego St., 61-704 Poznań, Poland
  • Poznań University of Technology, 2 Piotrowo St., 60-965 Poznań, Poland
author
  • University of Mons, 20 Parc Sq., B7000 Mons, Belgium
author
  • Poznań Supercomputing and Networking Center, 10 Noskowskiego St., 61-704 Poznań, Poland
Bibliography
  • [1] N. Ohnishi and A. Imiya, “Dominant plane detection from optical flow for robot navigation”, Pattern Recognition Letters 27 (9), 1009-1021 (2006).
  • [2] K. Aires, A. Santana, and A. Medeiros, “Optical flow using color information”, Proc. 2008 ACM symp. on Applied Computing 1, 1607-1611 (2008).
  • [3] A. Fonseca, L. Mayron, D. Socek, and O. Marques, “Design and implementation of an optical flow-based autonomous video surveillance system”, Proc. IASTED 1, 209-214 (2008).
  • [4] J. Gibson, The Perception of the Visual World, Houghton Mifflin, Boston, 1950.
  • [5] B. Horn and B. Schunck, “Determining optical flow”, Artificial Intelligence 17, 185-203 (1981).
  • [6] B. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision”, Proc. Imaging Understanding Workshop 1, 121-130 (1981).
  • [7] M. Kulczewski, K. Kurowski, M. Kierzynka, M. Dohnalik, J. Kaczmarczyk, and A. Borujeni, “Modern hardware architectures accelerate porous media flow computations”, AIP Conference Proc. 1453, 161-166 (2012).
  • [8] M. Ciznicki, M. Kierzynka, K. Kurowski, B. Ludwiczak, K. Napierala, and J. Palczynski, “Efficient isosurface extraction using marching tetrahedra and histogram pyramids on multiple GPUs”, Lecture Notes in Computer Science 7204, 343-352 (2012).
  • [9] S.A. Mahmoudi, F. Lecron, P. Manneback, M. Benjelloun, and S. Mahmoudi, “GPU-based segmentation of cervical vertebra in X-ray images”, HPCCE Workshop, IEEE Int. Conf. on Cluster Computing 1, 1-8 (2010).
  • [10] F. Lecron, S.A. Mahmoudi, M. Benjelloun, S. Mahmoudi, and P. Manneback, “Heterogeneous computing for vertebra detection and segmentation in X-ray images”, Int. J. Biomedical Imaging: Parallel Computation in Medical Imaging Applications 2011, 1-12 (2011).
  • [11] S.A. Mahmoudi, F. Lecron, P. Manneback, M. Benjelloun, and S. Mahmoudi, “Efficient exploitation of heterogeneous platforms for vertebra detection in X-ray images”, Proc. Biomedical Engineering International Conf. 1, 1-6 (2012).
  • [12] J. Blazewicz, W. Frohmberg, M. Kierzynka, E. Pesch, and P. Wojciechowski, “Protein alignment algorithms with an efficient backtracking routine on multiple GPUs”, BMC Bioinformatics 12, 181 (2011).
  • [13] J. Blazewicz, W. Frohmberg, M. Kierzynka, and P. Wojciechowski, “G-PAS 2.0 - an improved version of protein alignment tool with an efficient backtracking routine on multiple GPUs”, Bull. Pol. Ac.: Tech. 60 (3), 491-494 (2012).
  • [14] J. Blazewicz, W. Frohmberg, M. Kierzynka, and P. Wojciechowski, “G-MSA - a GPU-based, fast and accurate algorithm for multiple sequence alignment”, J. Parallel and Distributed Computing 73 (1), 32-41 (2013).
  • [15] R. Nowotniak and J. Kucharski, “GPU-based tuning of quantum-inspired genetic algorithm for a combinatorial optimization problem”, Bull. Pol. Ac.: Tech. 60 (2), 323-330 (2012).
  • [16] M. Blazewicz, S. Brandt, M. Kierzynka, K. Kurowski, B. Ludwiczak, J. Tao, and J. Weglarz, “CaKernel - a parallel application programming framework for heterogeneous computing architectures”, Scientific Programming 19 (4), 185-197 (2011).
  • [17] M. Blazewicz, I. Hinder, D. Koppelman, S. Brandt, M. Ciznicki, M. Kierzynka, F. Loffler, E. Schnetter, and J. Tao, “From physics model to results: An optimizing framework for cross-architecture code generation”, Scientific Programming 21 (1-2), 1-16 (2013).
  • [18] M. Ciznicki, M. Kierzynka, P. Kopta, K. Kurowski, and P. Gepner, “Benchmarking data and compute intensive applications on modern CPU and GPU architectures”, Procedia Computer Science 9, 1900-1909 (2012).
  • [19] S.A. Mahmoudi and P. Manneback, “Efficient exploitation of heterogeneous platforms for images features extraction”, 3rd Int. Conf. on Image Processing Theory, Tools and Applications (IPTA) 1, 91-96 (2012).
  • [20] P. Ricardo Possa, S.A. Mahmoudi, N. Harb, and C. Valderrama, “A new self-adapting architecture for feature detection”, 22nd Int. Conf. on Field Programmable Logic and Applications 1, 643-646 (2012).
  • [21] P. Ricardo Possa, S.A. Mahmoudi, N. Harb, C. Valderrama, and P. Manneback, “A multi-resolution FPGA-based architecture for real-time edge and corner detection”, IEEE Trans. on Computers 6, 130 (2013).
  • [22] S. Sinha, J.-M. Frahm, M. Pollefeys, and Y. Genc, “GPU-based video feature tracking and matching”, EDGE, Workshop on Edge Computing Using New Commodity Architectures 1, CD-ROM (2006).
  • [23] Y. Mizukami and K. Tadamura, “Optical flow computation on Compute Unified Device Architecture”, Proc. 14th International Conf. on Image Analysis and Processing 1, 179-184 (2007).
  • [24] J. Huang, S. Ponce, S. Park, Y. Cao, and F. Quek, “GPU-accelerated computation for robust motion tracking using CUDA framework”, Proc. IET Int. Conf. on Visual Information Engineering 1, CD-ROM (2008).
  • [25] J. Marzat, Y. Dumortier, and A. Ducrot, “Real-time dense and accurate parallel optical flow using CUDA”, Proc. WSCG 1, 105-111 (2009).
  • [26] D. Douglas and T. Peucker, “Algorithms for the reduction of the number of points required to represent a digitized line or its caricature”, Cartographica: Int. J. Geographic Information and Geovisualization 10 (2), 112-122 (1973).
  • [27] H. Asada and M. Brady, “The curvature primal sketch”, IEEE Trans. on Pattern Analysis and Machine Intelligence 8 (1), 2-14 (1986).
  • [28] F. Mokhtarian and A. Mackworth, “Scale-based description and recognition of planar curves and two-dimensional shapes”, IEEE Trans. on Pattern Analysis and Machine Intelligence 8 (1), 34-43 (1986).
  • [29] R. Horaud, T. Skordas, and F. Veillon, “Finding geometric and relational structures in an image”, Proc. First Eur. Conf. on Computer Vision 1, 374-384 (1986).
  • [30] C. Harris and M. Stephens, “A combined corner and edge detector”, 4th Alvey Vision Conf. 15, 147-151 (1988).
  • [31] J. Bouguet, “Pyramidal implementation of the Lucas Kanade feature tracker”, Intel Corporation, Microprocessor Research Labs, 2000.
  • [32] H. Moravec, “Obstacle avoidance and navigation in the real world by a seeing robot rover”, Technical Report CMU-RI-TR-3, Carnegie-Mellon University, Pittsburgh, 1980.
  • [33] K. Rohr, “Recognizing corners by fitting parametric models”, Int. J. Computer Vision 9 (3), 213-230 (1992).
  • [34] R. Deriche and T. Blaszka, “Recovering and characterizing image features using an efficient model based approach”, Proc. Computer Vision and Pattern Recognition 1, 530-535 (1993).
  • [35] L. Parida, D. Geiger, and R. Hummel, “Junctions: detection, classification, and reconstruction”, IEEE Trans. on Pattern Analysis and Machine Intelligence 20 (7), 687-698 (1998).
  • [36] C. Schmid, R. Mohr, and C. Bauckhage, “Evaluation of interest point detectors”, Int. J. Computer Vision 37 (2), 151-172 (2000).
  • [37] D. Lowe, “Distinctive image features from scale-invariant keypoints”, Int. J. Computer Vision (IJCV) 60 (2), 91-110 (2004).
  • [38] S. Zhu and K.-K. Ma, “A new diamond search algorithm for fast block-matching motion estimation”, IEEE Trans. on Image Processing 9 (2), 287-290 (2000).
  • [39] B.D. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision (DARPA)”, Proc. 1981 DARPA Image Understanding Workshop 1, 121-130, April 1981.
  • [40] Y.-S. Chen, Y.-P. Hung, and C.-S. Fuh, “Fast block matching algorithm based on the winner-update strategy”, IEEE Trans. on Image Processing 10, 1212-1222 (2001).
  • [41] B. Kitt, B. Ranft, and H. Lategahn, “Block-matching based optical flow estimation with reduced search space based on geometric constraints”, Intelligent Transportation Systems 1, 1104-1109 (2010).
  • [42] S. Indu, M. Gupta, and A. Bhattacharyya, “Vehicle tracking and speed estimation using optical flow method”, Int. J. Engineering Science and Technology 3 (1), 429-434 (2011).
  • [43] E.L. Andrade, S. Blunsden, and R.B. Fisher, “Hidden Markov models for optical flow analysis in crowds”, Proc. 18th Int. Conf. on Pattern Recognition (ICPR ’06) 1, 460-463 (2006).
  • [44] L. Rabiner, “A tutorial on hidden Markov models and selected applications in speech recognition”, Proc. IEEE 77 (2), 257-286 (1989).
  • [45] Z. Kalal, K. Mikolajczyk, and J. Matas, “Face-TLD: Tracking- Learning-Detection applied to faces”, IEEE Int. Conf. on Image Processing 1, CD-ROM (2010).
  • [46] F. Cupillard, A. Avanzi, F. Bremond, and M. Thonnat, “Video understanding for metro surveillance, networking, sensing and control”, IEEE Int. Conf. on Networking, Sensing and Control 1, 186-191 (2004).
  • [47] N. Ihaddadene and C. Djeraba, “Real-time crowd motion analysis”, Proc. 19th Int. Conf. on Pattern Recognition (ICPR ’08) 1, CD-ROM (2008).
  • [48] C. Tomasi and T. Kanade, “Detection and tracking of point features”, Technical Report CMU-CS-91-132, pp. 383-394, Carnegie Mellon University, Pittsburgh, 1991.
  • [49] J.M. Ready and C.N. Taylor, “GPU acceleration of real-time feature based algorithms”, Proc. IEEE Workshop on Motion and Video Computing, WMVC ’07 (1), 8-9 (2007).
  • [50] N. Sundaram, T. Brox, and K. Keutzer, “Dense point trajectories by GPU-accelerated large displacement optical flow”, Tech. Rep. UCB/EECS-2010-104, University of California, Berkeley, 2010.
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-71fc2bea-c161-451a-baf0-6fc72e1f323d