Article title

Improved competitive neural network for classification of human postures based on data from RGB-D sensors

Publication languages
EN
Abstracts
EN
The cognitive goal of this paper is to assess whether marker-less motion capture systems provide sufficient data to recognize human postures in the side view. The research goal is to develop a new posture classification method that allows human activities to be analysed using data recorded by RGB-D sensors. The method is insensitive to the duration of the recorded activity and gives satisfactory results for the sagittal plane. An improved competitive neural network (cNN) was used. The method of pre-processing the data is discussed first. Then, a method for classifying human postures is presented. Finally, the classification quality obtained with various distance metrics is assessed. Data sets covering a selection of human activities were created. Postures typical of these activities were identified using the classifying neural network. The classification quality obtained with the proposed cNN was compared with that of two other popular neural networks. The results confirmed the advantage of the cNN. The developed method makes it possible to recognize human postures by observing movement in the sagittal plane.
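The abstract refers to a competitive (winner-take-all) network whose units compete for each input through a distance metric. The sketch below is a minimal, generic competitive-learning classifier in Python/NumPy, intended only to illustrate that technique; it is not the paper's improved cNN, and the feature vectors, metrics, and training schedule are illustrative assumptions.

```python
# Minimal competitive-learning (winner-take-all) classifier sketch.
# Hypothetical example data and parameters; not the paper's method.
import numpy as np


def pairwise_distance(prototypes, x, metric="euclidean"):
    """Distance from one sample x to every prototype (rows of `prototypes`)."""
    if metric == "euclidean":
        return np.linalg.norm(prototypes - x, axis=1)
    if metric == "manhattan":
        return np.abs(prototypes - x).sum(axis=1)
    if metric == "cosine":
        num = prototypes @ x
        den = np.linalg.norm(prototypes, axis=1) * np.linalg.norm(x) + 1e-12
        return 1.0 - num / den
    raise ValueError(f"unknown metric: {metric}")


def train_competitive(samples, n_units, epochs=50, lr=0.1, metric="euclidean", seed=0):
    """Learn one prototype per unit by pulling the winning unit toward each sample."""
    rng = np.random.default_rng(seed)
    # Initialise prototypes from randomly chosen samples.
    prototypes = samples[rng.choice(len(samples), size=n_units, replace=False)].copy()
    for _ in range(epochs):
        for i in rng.permutation(len(samples)):
            x = samples[i]
            winner = np.argmin(pairwise_distance(prototypes, x, metric))
            prototypes[winner] += lr * (x - prototypes[winner])  # winner-take-all update
        lr *= 0.95  # simple learning-rate decay
    return prototypes


def classify(prototypes, x, metric="euclidean"):
    """Posture class = index of the nearest prototype."""
    return int(np.argmin(pairwise_distance(prototypes, x, metric)))


if __name__ == "__main__":
    # Toy stand-in for skeleton feature vectors of two postures.
    rng = np.random.default_rng(1)
    posture_a = rng.normal(0.0, 0.1, size=(100, 6))
    posture_b = rng.normal(1.0, 0.1, size=(100, 6))
    prototypes = train_competitive(np.vstack([posture_a, posture_b]), n_units=2)
    print(classify(prototypes, posture_a[0]), classify(prototypes, posture_b[0]))
```

Swapping the `metric` argument reproduces, in spirit, the comparison of distance metrics mentioned in the abstract, without implying anything about the specific metrics the authors evaluated.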
Authors
  • Institute of Micromechanics and Photonics, Faculty of Mechatronics, Warsaw University of Technology, ul. Sw. Andrzeja Boboli 8, 02-525 Warsaw, Poland
  • Institute of Automatics and Robotics, Faculty of Mechatronics, Warsaw University of Technology, ul. Sw. Andrzeja Boboli 8, 02-525 Warsaw, Poland
  • Institute of Aeronautics and Applied Mechanics, Faculty of Power and Aeronautical Engineering, Warsaw University of Technology, ul. Nowowiejska 24, 00-665 Warsaw, Poland, www: https://ztmir.meil.pw.edu.pl/web/eng/Pracownicy/prof.-Teresa-Zielinska
Notes
Record developed using funds from the Ministry of Science and Higher Education (MNiSW), agreement no. SONP/SP/546092/2022, under the "Społeczna odpowiedzialność nauki" (Social Responsibility of Science) programme - module: Popularisation of science and promotion of sport (2024).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-d25cde2e-f97b-4773-9e6f-8e4442f11394