Article title

An improved human pose estimation using Deep Neural Network for the optimization of human-robot interactions

Publication languages
EN
Abstracts
EN
Research shows that mobile support robots are becoming increasingly valuable in various situations, such as monitoring daily activities, providing medical services, and supporting elderly people. For interpreting human conduct and intention, these robots largely depend on human activity recognition (HAR). However, prior awareness of human appearance (human recognition) and the ability to track humans for monitoring (human surveillance) are necessary before HAR can work with assistance robots. Moreover, multimodal human behavior recognition is constrained by costly hardware and rigorous setup requirements, making it difficult to balance inference accuracy against system cost. A key problem in human pose and behavior detection is therefore the ability to extract additional meaningful interpretations from readily available live video. In this paper, we employ human pose detection to address this problem and provide well-crafted assessment measures to demonstrate the effectiveness of our approach, which utilizes deep neural networks (DNNs). This article proposes a human intention detection system that anticipates human intentions in human- and robot-centered scenarios by incorporating visual information together with input features such as human positions, head orientations, and critical skeletal key points. Our goal is to aid human-robot interaction by supporting mobile robots with real-time human pose prediction based on the recognition of 18 distinct key points of the body's structure. The proposed approach is implemented in Python, and simulation results verify its reliability and accuracy.
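No code is given on this record page; the following is a minimal illustrative sketch of the kind of per-frame pipeline the abstract describes, assuming the publicly available 18-keypoint COCO OpenPose model loaded through OpenCV's DNN module. The model file paths, the confidence threshold, and the simple head-orientation heuristic are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch: 18-keypoint human pose extraction with OpenCV's DNN module.
# Assumes the publicly available OpenPose COCO model files; paths and thresholds
# are placeholders, not the authors' actual configuration.
import cv2

PROTO_FILE = "pose_deploy_linevec.prototxt"    # assumed local path
WEIGHTS_FILE = "pose_iter_440000.caffemodel"   # assumed local path
N_KEYPOINTS = 18                               # COCO body model: nose, neck, shoulders, ...
CONF_THRESHOLD = 0.1

net = cv2.dnn.readNetFromCaffe(PROTO_FILE, WEIGHTS_FILE)

def extract_keypoints(frame):
    """Return a list of (x, y) image coordinates or None for each of the 18 keypoints."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1.0 / 255, (368, 368),
                                 (0, 0, 0), swapRB=False, crop=False)
    net.setInput(blob)
    output = net.forward()                     # shape (1, 57, H', W') for the COCO model
    points = []
    for i in range(N_KEYPOINTS):
        prob_map = output[0, i, :, :]
        _, prob, _, point = cv2.minMaxLoc(prob_map)
        if prob > CONF_THRESHOLD:
            x = int(w * point[0] / output.shape[3])
            y = int(h * point[1] / output.shape[2])
            points.append((x, y))
        else:
            points.append(None)
    return points

def head_orientation(points):
    """Crude head-orientation cue (assumption): horizontal offset of the nose from the neck."""
    nose, neck = points[0], points[1]          # COCO indices: 0 = nose, 1 = neck
    if nose is None or neck is None:
        return None
    return "left" if nose[0] < neck[0] else "right"

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)                  # live video source, as in the abstract
    ok, frame = cap.read()
    if ok:
        kp = extract_keypoints(frame)
        print("detected keypoints:", sum(p is not None for p in kp), "of", N_KEYPOINTS)
        print("head orientation cue:", head_orientation(kp))
    cap.release()
```

In the full system described by the abstract, such per-frame keypoints and head-orientation cues would feed the DNN-based intention-prediction stage that supports the mobile robot's interaction decisions.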
Contributors
author
  • Universitat Politècnica de Catalunya (UPC), Barcelona, Spain
author
  • AGH University of Krakow, Krakow, Poland
Notes
This work was supported financially by the AGH University of Science and Technology, Krakow, Poland, under subvention no. 16.16.230.434.
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-62eec0e0-3cc6-420e-9238-0c817976fb51