Article title

People tracking in video surveillance systems based on artificial intelligence

Publication languages
EN
Abstracts
EN
As security is one of the basic human needs, we need security systems that can prevent crimes from happening. In general, surveillance videos are used to observe the environment and human behavior in a given location. However, conventional surveillance cameras can only record images or video, without providing additional information. Therefore, more advanced systems are needed to obtain further information, such as the position and movement of people. This research extracts this information from surveillance video footage using person tracking, detection, and identification algorithms. The framework is based on deep learning, a popular branch of artificial intelligence. In the field of video surveillance, person tracking is considered a challenging task. Many computer vision, machine learning, and deep learning techniques have been developed in recent years, most of them based on frontal-view images or video sequences. In this work, we compare previous work related to this topic.
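
The abstract describes a detect-then-track pipeline: a deep-learning detector finds people in each frame, and a tracker links detections over time to recover position and movement. The sketch below is an illustration only, not the method evaluated in the paper: it pairs a generic person detector with a simple greedy IoU tracker. Here detect_people is a hypothetical placeholder for any detector returning (x1, y1, x2, y2) boxes, and OpenCV is assumed only for frame decoding.

# Illustrative sketch (not the authors' implementation): minimal
# detect-then-track loop over surveillance video frames.
import cv2  # OpenCV, used here only to decode the video stream

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter + 1e-9)

def track_video(path, detect_people, iou_threshold=0.3):
    """Greedy IoU association: each detection inherits the ID of the
    best-overlapping track from the previous frame, else a new ID."""
    cap = cv2.VideoCapture(path)
    tracks = {}          # track_id -> last known box
    trajectories = {}    # track_id -> list of boxes over time
    next_id = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        new_tracks = {}
        for box in detect_people(frame):  # hypothetical detector call
            best_id, best_iou = None, iou_threshold
            for tid, prev_box in tracks.items():
                overlap = iou(box, prev_box)
                if overlap > best_iou and tid not in new_tracks:
                    best_id, best_iou = tid, overlap
            if best_id is None:
                best_id = next_id
                next_id += 1
            new_tracks[best_id] = box
            trajectories.setdefault(best_id, []).append(box)
        tracks = new_tracks
    cap.release()
    return trajectories   # per-person position history (movement)

A practical system, as the referenced works show, would replace this greedy association with appearance features, re-identification, or probabilistic data association to handle occlusions and identity switches.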
Authors
author
  • Intelligent Processing and Security of System Team, Faculty of Science, Mohammed V University, Rabat, Morocco
  • Intelligent Processing and Security of System Team, Faculty of Science, Mohammed V University, Rabat, Morocco
author
  • Intelligent Processing and Security of System Team, Faculty of Science, Mohammed V University, Rabat, Morocco
Bibliography
  • [1] A. W. Senior, G. Potamianos, S. Chu, Z. Zhang, and A. Hampapur. "A Comparison of Multicamera Person-Tracking Algorithms," IBM T. J. Watson Research Center, Yorktown Heights, NY, USA.
  • [2] S. Yu, Y. Yang, X. Li, and A. G. Hauptmann. "Long-Term Identity-Aware Multi-Person Tracking for Surveillance Video Summarization," arXiv:1604.07468v2 [cs.CV], 11 Apr 2017.
  • [3] F. Fleuret, J. Berclaz, R. Lengagne, and P. Fua. "Multicamera People Tracking with a Probabilistic Occupancy Map," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008.
  • [4] T. Dobbert. Matchmoving: The Invisible Art of Camera Tracking. Sybex, Feb. 2005, ISBN 0-7821-4403-9.
  • [5] L. Mihaylova, P. Brasnett, N. Canagarajah, and D. Bull. "Object Tracking by Particle Filtering Techniques in Video Sequences," in Advances and Challenges in Multisensor Data and Information, NATO Security Through Science Series, vol. 8. Netherlands: IOS Press, 2007, pp. 260–268.
  • [6] K. Chandrasekaran. "Parametric & Non-Parametric Background Subtraction Model with Object Tracking for VENUS." Thesis, Rochester Institute of Technology, 2010.
  • [7] L. Bao, B. Wu, and W. Liu. "CNN in MRF: Video Object Segmentation via Inference in a CNN-Based Higher-Order Spatiotemporal MRF," IEEE Conference on Computer Vision and Pattern Recognition, 2018, DOI: 10.1109/CVPR.2018.00626.
  • [8] C. Feichtenhofer, A. Pinz, and A. Zisserman. “Detect to Track and Track to Detect,” IEEE International Conference on Computer Vision, 2017.
  • [9] B. Li, J. Yan, W. Wu, Z. Zhu, and X. Hu. “High Performance Visual Tracking with Siamese Region Proposal Network,” IEEE Conference on Computer Vision and Pattern Recognition, 2018.
  • [10] M. Danelljan, G. Bhat, F. S. Khan, M. Felsberg, et al. “Eco: Efficient Convolution Operators for Tracking,” IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • [11] Q. Wang, L. Zhang, L. Bertinetto, W. Hu, and P. H. S. Torr. "Fast Online Object Tracking and Segmentation: A Unifying Approach," IEEE Conference on Computer Vision and Pattern Recognition, 2019, DOI: 10.1109/CVPR.2019.00142.
  • [12] Z. Zhu, Q. Wang, B. Li, W. Wu, J. Yan, and W. Hu. “Distractor‐Aware Siamese Networks for Visual Object Tracking,” European Conference on Computer Vision, 2018.
  • [13] T. Yang, and A. B. Chan. “Learning Dynamic Memory Networks for Object Tracking.” In European Conference on Computer Vision, 2018. Vol. 1. ISBN 9780549524892.
  • [14] "Basic Concept and Technical Terms," Ishikawa Watanabe Group Laboratory, University of Tokyo. Retrieved 12 February 2015. (Background subtraction is the process by which moving regions are segmented in image sequences.)
  • [15] P. Mountney, D. Stoyanov, and G.-Z. Yang. "Three-Dimensional Tissue Deformation Recovery and Tracking: Introducing Techniques Based on Laparoscopic or Endoscopic Images," IEEE Signal Processing Magazine, vol. 27, July 2010, pp. 14–24.
  • [16] Z. Pang, Z. Li, and N. Wang. “Simpletrack: Understanding and Rethinking 3D Multi‐Object Tracking,” arXiv:2111.09621v1 [cs.CV] 18 Nov 2021.
  • [17] C. Wang, A. Bochkovskiy, and H. M. Liao. “YOLOv7: Trainable Bag‐of‐Freebies Sets New State‐of‐the‐Art for Real‐Time Object Detectors,” arXiv:2207.02696v1 [cs.CV] 6 Jul 2022.
  • [18] P. Dai, R. Weng, W. Choi, C. Zhang, Z. He, and W. Ding: “Learning a Proposal Classifier for Multiple Object Tracking.” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, 2443–2452.
  • [19] J.-N. Zaech, A. Liniger, D. Dai, M. Danelljan, and L. Van Gool. "Learnable Online Graph Representations for 3D Multi-Object Tracking," IEEE Robotics and Automation Letters, 2022.
  • [20] L. Lin, H. Fan, Y. Xu, and H. Ling. “Swintrack: A Simple and Strong Baseline for Transformer Tracking,” arXiv preprint arXiv:2112.00995, 2021.
  • [21] J. Pang, L. Qiu, X. Li, H. Chen, Q. Li, T. Darrell, and F. Yu: “Quasidense Similarity Learning for Multiple Object Tracking,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 164–173.
  • [22] F. Zeng, B. Dong, T. Wang, X. Zhang, and Y. Wei. "MOTR: End-to-End Multiple-Object Tracking with Transformer," arXiv preprint arXiv:2105.03247, 2021.
  • [23] J.‐N. Zaech, A. Liniger, M. Danelljan, D. Dai, and L. Van Gool. “Adiabatic Quantum Computing for Multi Object Tracking,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 8811–8822.
  • [24] X. Han, Q. You, C. Wang, Z. Zhang, P. Chu, H. Hu, J. Wang, and Z. Liu. “Mmptrack: Large‐Scale Densely Annotated Multi‐Camera Multiple People Tracking Benchmark,” arXiv preprint arXiv:2111.15157, 2021.
  • [25] X. Zhang, X. Wang, and C. Gu. "Online Multi-Object Tracking with Pedestrian Re-Identification and Occlusion Processing," The Visual Computer, vol. 37, no. 5, 2021, pp. 1089–1099.
  • [26] K. Cho, and D. Cho. “Autonomous Driving Assistance with Dynamic Objects using Traffic Surveillance Cameras,” Applied Sciences, vol. 12, no. 12, 2022, p. 6247.
  • [27] A. Cioppa, S. Giancola, A. Deliege, L. Kang, X. Zhou, Z. Cheng, B. Ghanem, and M. Van Droogenbroeck. “Soccernet‐Tracking: Multiple Object Tracking Dataset and Benchmark in Soccer Videos,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 3491–3502.
Notes
Record developed with funds from the Ministry of Education and Science (MEiN), agreement no. SONP/SP/546092/2022, under the "Social Responsibility of Science" programme - module: Popularization of science and promotion of sport (2024).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-0f132c60-ee83-4ece-8095-d6eed9a4afe9