Article title

Implementing visual assistant using YOLO and SSD for visually-impaired persons

Publication languages
EN
Abstracts
EN
Artificial Intelligence has been touted as the next big technology, capable of reshaping the current technological landscape. Through the use of Artificial Intelligence and Machine Learning, pioneering work has been undertaken in visual and object detection. In this paper, we analyze a visual assistant application for guiding visually impaired individuals. Recent breakthroughs in computer vision and supervised learning models have simplified the problem considerably, to the point where new models are easier to build and deploy than existing ones. Many object detection models now provide object tracking and detection with high accuracy, and these techniques have been widely used to automate detection tasks in various areas. Recently developed detection approaches such as YOLO (You Only Look Once) and SSD (Single Shot Detector) have proved consistent and accurate at detecting objects in real time. This paper combines these state-of-the-art, real-time object detection techniques to develop a solid base model and implements a 'Visual Assistant' for visually impaired people. The results obtained are superior to those of existing algorithms.
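The abstract names YOLO as one of the real-time detectors underpinning the assistant. As a concrete illustration only, the sketch below shows a minimal real-time YOLO inference loop of the kind such an assistant could build on, using OpenCV's DNN module; the model file names (yolov3.cfg, yolov3.weights, coco.names), the 416x416 input size, the camera index, and both thresholds are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch (not the paper's implementation) of a real-time YOLO
# detection loop using OpenCV's DNN module. File names, thresholds, and
# the 416x416 input size are illustrative assumptions.
import cv2
import numpy as np

CONF_THRESHOLD = 0.5  # assumed minimum class confidence
NMS_THRESHOLD = 0.4   # assumed non-maximum-suppression overlap threshold

# Hypothetical paths to a pretrained YOLOv3 model and COCO class labels.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
with open("coco.names") as f:
    classes = [line.strip() for line in f]

cap = cv2.VideoCapture(0)  # default camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]

    # YOLO expects a square, 0-1 normalized, RGB blob.
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    # Each detection row is [cx, cy, bw, bh, objectness, class scores...].
    boxes, confidences, class_ids = [], [], []
    for output in outputs:
        for det in output:
            scores = det[5:]
            class_id = int(np.argmax(scores))
            conf = float(scores[class_id])
            if conf > CONF_THRESHOLD:
                cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
                boxes.append([int(cx - bw / 2), int(cy - bh / 2),
                              int(bw), int(bh)])
                confidences.append(conf)
                class_ids.append(class_id)

    # Suppress overlapping boxes, then report what remains; a real
    # assistant would route these labels to a text-to-speech engine.
    if boxes:
        keep = cv2.dnn.NMSBoxes(boxes, confidences,
                                CONF_THRESHOLD, NMS_THRESHOLD)
        for i in np.array(keep).flatten():
            print(classes[class_ids[i]])

    cv2.imshow("detections", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

An SSD backbone could be swapped in at the same point in this pipeline (for example via cv2.dnn.readNetFromTensorflow), since only the model loading and output parsing would differ.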
Authors
  • Medi-Caps University, Indore, India
  • Medi-Caps University, Indore, India
  • Medi-Caps University, Indore, India
  • Medi-Caps University, Indore, India
  • Medi-Caps University, Indore, India
  • Medi-Caps University, Indore, India
Notes
Record developed with funds from the Ministry of Science and Higher Education (MNiSW), agreement no. SONP/SP/546092/2022, under the "Społeczna odpowiedzialność nauki" (Social Responsibility of Science) programme, module: popularization of science and promotion of sport (2024).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-03a0e5a1-c36d-474a-9c4c-b38f53e0edaa