Article title

Unveiling the art of video enhancement: a comprehensive examination of content selection and sequencing for optimal quality in conventional and AR/VR environments

Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
The demand for high-quality video content has grown with the rise of new technologies. The quality of visual content directly affects user engagement and satisfaction, revealing a clear correlation between user expectations and content delivery. Recent research in digital media has emphasized the importance of selecting the right type of content to optimize the user experience; content selection is of particular importance in signal processing, multimedia communication, and image processing. Factors such as motion characteristics and visual complexity must be considered for precise results, and the main findings highlight dynamic content, diversity, and user-generated content (UGC) as significant areas of interest. Compared with the current literature, challenges remain, including variability in content selection, the absence of standardized criteria, and the scarcity of benchmark data sets. Integrating machine learning algorithms and generative AI with scenario-based selection criteria can help address these problems. By embracing technology-driven, inclusive, and collaborative approaches, this review paper examines improved content and sequence choices for video enhancement in both conventional and AR/VR environments.
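The abstract names motion characteristics and visual complexity as key factors in selecting source sequences. A common way to quantify these in video-quality work is the ITU-T P.910 spatial information (SI) and temporal information (TI) measures; the sketch below (my own illustration, not code from the paper) computes both for a list of grayscale frames, under the assumption that frames arrive as NumPy float arrays.

```python
import numpy as np

def sobel_magnitude(frame: np.ndarray) -> np.ndarray:
    """Sobel gradient magnitude of one grayscale frame."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(frame.astype(float), 1, mode="edge")
    gx = np.zeros(frame.shape, dtype=float)
    gy = np.zeros(frame.shape, dtype=float)
    # Explicit 3x3 correlation, so the example has no dependency beyond NumPy.
    for i in range(3):
        for j in range(3):
            win = pad[i:i + frame.shape[0], j:j + frame.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)

def si_ti(frames: list[np.ndarray]) -> tuple[float, float]:
    """P.910-style features: SI = max per-frame std of the Sobel-filtered
    frame (visual complexity); TI = max std of successive frame differences
    (motion). Higher values indicate more demanding content."""
    si = max(float(sobel_magnitude(f).std()) for f in frames)
    diffs = (b.astype(float) - a.astype(float)
             for a, b in zip(frames, frames[1:]))
    ti = max((float(d.std()) for d in diffs), default=0.0)
    return si, ti
```

A selection pipeline would compute (SI, TI) for every candidate clip and pick sequences that spread out over the SI/TI plane, so the test set covers both simple/static and complex/dynamic content.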
Authors
  • AGH University of Krakow, Poland
  • AGH University of Krakow, Poland
author
  • AGH University of Krakow, Poland
Notes
Record developed with funds from the Ministry of Science and Higher Education (MNiSW), agreement no. POPUL/SP/0154/2024/02 under the programme "Social Responsibility of Science II", module: Popularisation of Science (2025).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-b8b26e73-053b-4314-9d64-2e6f0ae66c88