Article title

Crowdsourcing Evaluation of Video Summarization Algorithm

Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
Video summarization is the technique of selecting the most relevant and informative sections of a video to produce a shortened version that can be viewed more quickly. Crowdsourcing, a comparatively recent approach, is exploited in the present study to evaluate video summarization: a task is divided into multiple parts, and each part is assessed by a large group of individuals, which makes it possible to tackle problems that are difficult to solve with traditional computational machines alone. We present a crowdsourced subjective experiment in which summaries of processed video sequences are rated, thereby evaluating the efficacy of a video summarization algorithm. A group of 45 individuals participated in the experiment; each was asked to watch 24 videos of either 30 or 45 seconds in duration. An experimental comparison was conducted between presentation-order and random-selection methods. Content-based video segmentation was also used to represent different levels of complexity and visual richness. The findings of the assessment show that specific characteristics of a video, such as its length, complexity, and content, play a major role in the performance of the summarization algorithm. This study is an essential step toward video summarization systems that are both more accurate and more efficient.
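The bibliography cites PySceneDetect [11], a likely candidate for the content-based segmentation step mentioned above. Below is a minimal sketch in Python, assuming the current scenedetect API; the file name and detection threshold are illustrative placeholders, not the authors' actual configuration:

    # Minimal sketch of content-based shot segmentation with PySceneDetect [11].
    # "input.mp4" and threshold=27.0 are hypothetical placeholders.
    from scenedetect import detect, ContentDetector

    # Split the video into shots wherever the frame-to-frame content change
    # score exceeds the threshold.
    shots = detect("input.mp4", ContentDetector(threshold=27.0))

    # Each shot is a (start, end) pair of FrameTimecode objects. A summarizer
    # could rank these shots and keep enough of them to fill a 30- or
    # 45-second summary, matching the durations evaluated in the experiment.
    for start, end in shots:
        duration = end.get_seconds() - start.get_seconds()
        print(f"{start.get_timecode()} -> {end.get_timecode()} ({duration:.1f}s)")

Participants would then rate such machine-generated summaries, which is the crowdsourced judgment the study collects.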
Authors
  • AGH University of Krakow, Poland
  • AGH University of Krakow, Poland
author
  • AGH University of Krakow, Poland
Bibliography
  • [1] M. Grega, L. Janowski, M. Leszczuk, P. Romaniak, and Z. Papir, “Quality of experience evaluation for multimedia services,” Przegląd Telekomunikacyjny i Wiadomości Telekomunikacyjne, vol. 81, pp. 142-153, 01 2008. [Online]. Available: https://sigma-not.pl/publikacja-34775-quality-of-experience-evaluation-for-multimedia-services-przeglad-telekomunikacyjny-2008-4.html
  • [2] T. Hoßfeld, C. Keimel, M. Hirth, B. Gardlo, J. Habigt, K. Diepold, and P. Tran-Gia, “Best practices for QoE crowdtesting: QoE assessment with crowdsourcing,” IEEE Transactions on Multimedia, vol. 16, no. 2, pp. 541-558, 2014. [Online]. Available: https://doi.org/10.1109/TMM.2013.2291663
  • [3] M. Leszczuk, M. Grega, A. Koźbiał, J. Gliwski, K. Wasieczko, and K. Smaïli, “Video summarization framework for newscasts and reports - work in progress,” in Multimedia Communications, Services and Security, A. Dziech and A. Czyżewski, Eds. Cham: Springer International Publishing, 2017, pp. 86-97. [Online]. Available: https://doi.org/10.1007/978-3-319-69911-0_7
  • [4] S.-Y. Wu, R. Thawonmas, and K.-T. Chen, “Video summarization via crowdsourcing,” in CHI ’11 Extended Abstracts on Human Factors in Computing Systems, ser. CHI EA ’11. New York, NY, USA: Association for Computing Machinery, 2011, pp. 1531-1536. [Online]. Available: https://doi.org/10.1145/1979742.1979803
  • [5] M. Leszczuk, L. Janowski, J. Nawała, and M. Grega, “User-generated content (UGC)/in-the-wild video content recognition,” in Intelligent Information and Database Systems, N. T. Nguyen, T. K. Tran, U. Tukayev, T.-P. Hong, B. Trawiński, and E. Szczerbicki, Eds. Cham: Springer Nature Switzerland, 2022, pp. 356-368. [Online]. Available: https://doi.org/10.1007/978-3-031-21967-2_29
  • [6] M. Leszczuk and M. Duplaga, “Algorithm for video summarization of bronchoscopy procedures,” BioMedical Engineering OnLine, vol. 10, p. 110, 12 2011. [Online]. Available: https://doi.org/10.1186/1475-925X-10-110
  • [7] P. Romaniak, M. Muy, A. Mauthe, S. D’Antonio, and M. Leszczuk, “Framework for the integrated video quality assessment,” Multimedia Tools and Applications, vol. 61, 12 2011. [Online]. Available: https://doi.org/10.1007/s11042-011-0946-3
  • [8] W.-T. Tsai, L. Zhang, S. Hu, Z. Fan, and Q. Wang, “Crowdtesting practices and models: An empirical approach,” Information and Software Technology, vol. 154, p. 107103, 2023. [Online]. Available: https://doi.org/10.1016/j.infsof.2022.107103
  • [9] M. Shahid, J. Søgaard, J. Pokhrel, K. Brunnström, K. Wang, S. Tavakoli, and N. García, “Crowdsourcing based subjective quality assessment of adaptive video streaming,” in 2014 Sixth International Workshop on Quality of Multimedia Experience (QoMEX), 2014, pp. 53-54. [Online]. Available: https://doi.org/10.1109/QoMEX.2014.6982289
  • [10] International Telecommunication Union (ITU), “Subjective video quality assessment methods for multimedia applications,” Recommendation ITU-T P.910, 2022. [Online]. Available: https://www.itu.int/rec/T-REC-P.910-202207-I/en
  • [11] “PySceneDetect.” [Online]. Available: https://www.scenedetect.com
  • [12] AGH Video Quality of Experience (QoE). [Online]. Available: https://qoe.agh.edu.pl/indicators/#indicators
  • [13] P. Romaniak, L. Janowski, M. Leszczuk, and Z. Papir, “Perceptual quality assessment for H.264/AVC compression,” in 2012 IEEE Consumer Communications and Networking Conference (CCNC), 2012, pp. 597-602. [Online]. Available: https://doi.org/10.1109/CCNC.2012.6181021
  • [14] M. Leszczuk, M. Kobosko, J. Nawała, F. Korus, and M. Grega, “‘In the wild’ video content as a special case of user generated content and a system for its recognition,” Sensors, vol. 23, no. 4, 2023. [Online]. Available: https://doi.org/10.3390/s23041769
  • [15] A. Badiola, A. M. Zorrilla, B. Garcia-Zapirain Soto, M. Grega, M. Leszczuk, and K. Smaïli, “Evaluation of improved components of AMIS project for speech recognition, machine translation and video/audio/text summarization,” in Multimedia Communications, Services and Security, A. Dziech, W. Mees, and A. Czyżewski, Eds. Cham: Springer International Publishing, 2020, pp. 320-331. [Online]. Available: https://doi.org/10.1007/978-3-030-59000-0_24
  • [16] FFmpeg application. [Online]. Available: https://www.ffmpeg.org
  • [17] N. Cieplińska, L. Janowski, K. De Moor, and M. Wierzchoń, “Long-term video QoE assessment studies: A systematic review,” IEEE Access, vol. 10, pp. 133883-133897, 2022. [Online]. Available: https://doi.org/10.1109/ACCESS.2022.3231747
  • [18] P. Pérez, L. Janowski, N. García, and M. Pinson, “Subjective assessment experiments that recruit few observers with repetitions (FOWR),” IEEE Transactions on Multimedia, vol. 24, pp. 3442-3454, 2022. [Online]. Available: https://doi.org/10.1109/TMM.2021.3098450
  • [19] GitHub repository. [Online]. Available: https://github.com/dutta-agh/TANGO_A-B
  • [20] K. Borchert, A. Seufert, E. Gamboa, M. Hirth, and T. Hossfeld, “In vitro vs in vivo: Does the study’s interface design influence crowdsourced video QoE?” Quality and User Experience, vol. 6, 12 2021. [Online]. Available: https://doi.org/10.1007/s41233-020-00041-2
  • [21] M. Leszczuk, L. Janowski, J. Nawała, and A. Boev, “Objective video quality assessment method for face recognition tasks,” Electronics, vol. 11, no. 8, 2022. [Online]. Available: https://www.mdpi.com/2079-9292/11/8/1167
  • [22] Crowdsourcing Evaluation of Video Summarization. [Online]. Available: http://pbz.kt.agh.edu.pl/~testySubiektywne/QoE_Dutta/TANGO/
Notes
Record developed using funds of the Ministry of Science and Higher Education (MNiSW), agreement no. POPUL/SP/0154/2024/02, under the "Social Responsibility of Science II" programme - module: Science Popularisation (2025).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-78f95b4d-0eb9-4213-afa3-825421492546