Article title
Identifiers
DOI
Title variants
Conference
Federated Conference on Computer Science and Information Systems (14 ; 01-04.09.2019 ; Leipzig, Germany)
Publication languages
Abstracts
Human Activity Recognition (HAR) is an important area of research in ambient intelligence for contexts such as ambient-assisted living. Existing HAR approaches are mostly based on vision, mobile, or wearable sensors. In this paper, we propose a hybrid approach to HAR that combines three types of sensing technologies, namely smartphone accelerometers, RGB cameras, and ambient sensors. Acceleration and video streams are analyzed using a multiclass Support Vector Machine (SVM) and Convolutional Neural Networks, respectively. This analysis is enriched with ambient sensing data to assign semantics to human activities using description logic rules. For integration, we design and implement a framework that addresses the human activity recognition pipeline from data collection through activity recognition and visualization. Use cases and performance evaluations of the proposed approach clearly show its utility and efficiency in several everyday scenarios.
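The accelerometer branch described in the abstract typically segments the raw acceleration stream into fixed-length windows and extracts statistical features before multiclass SVM classification. The abstract does not list the exact feature set, so the sketch below uses common, illustrative choices (per-axis mean and standard deviation plus mean signal magnitude) as an assumption:

```python
import math

def extract_features(window):
    """Compute simple statistical features from one window of
    (x, y, z) accelerometer samples. The feature choice here is
    illustrative, not the paper's exact feature set."""
    n = len(window)
    feats = []
    for axis in range(3):  # x, y, z axes
        vals = [s[axis] for s in window]
        mean = sum(vals) / n
        var = sum((v - mean) ** 2 for v in vals) / n
        feats.extend([mean, math.sqrt(var)])  # per-axis mean and std
    # Mean signal magnitude, largely orientation-independent
    feats.append(sum(math.sqrt(x*x + y*y + z*z) for x, y, z in window) / n)
    return feats

# Example: a stationary window with gravity along the z axis
window = [(0.0, 0.0, 9.81)] * 50
print(extract_features(window))  # 7-dimensional feature vector
```

The resulting feature vectors would then be fed to a multiclass SVM trained on labeled activity windows (e.g., walking, sitting, standing).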
Keywords
Year
Volume
Pages
101--105
Physical description
Bibliography: 22 items, tables, illustrations
Authors
author
- Military Polytechnic School, PO BOX 17, Bordj-El-Bahri, 16111, Algiers, Algeria
author
- Military Polytechnic School, PO BOX 17, Bordj-El-Bahri, 16111, Algiers, Algeria
author
- Military Polytechnic School, PO BOX 17, Bordj-El-Bahri, 16111, Algiers, Algeria
author
- Military Polytechnic School, PO BOX 17, Bordj-El-Bahri, 16111, Algiers, Algeria
author
- Military Polytechnic School, PO BOX 17, Bordj-El-Bahri, 16111, Algiers, Algeria
Bibliography
- 1. P. Suresh, J. V. Daniel, V. Parthasarathy, and R. H. Aswathy, “A state of the art review on the Internet of Things (IoT) history, technology and fields of deployment,” in Proc. IEEE Int. Conf. Sci., Eng. Manage. Res. (ICSEMR'14), Chennai, India, Nov. 2014, pp. 1-8.
- 2. O. Lara and M. Labrador, “A survey on human activity recognition using wearable sensors,” IEEE Commun. Surv. Tutor. 1 (2012) 1-18.
- 3. H. F. Nweke, Y. W. Teh, M. A. Al-garadi, and U. R. Alo, “Deep learning algorithms for human activity recognition using mobile and wearable sensor networks: State of the art and research challenges,” Expert Systems With Applications 105 (2018).
- 4. M. M. Hassan, M. Zia Uddin, A. Mohamed, and A. Almogren, “A robust human activity recognition system using smartphone sensors and deep learning,” Future Generation Computer Systems 81 (2018) 307-313.
- 5. O. C. Ann and L. B. Theng, “Human activity recognition: A review,” in 2014 IEEE International Conference on Control System, Computing and Engineering.
- 6. D. D. Dawn and S. H. Shaikh, “A comprehensive survey of human action recognition with spatio-temporal interest point (STIP) detector,” The Visual Computer, vol. 32, no. 3, pp. 289-306, 2016.
- 7. A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei, “Large-scale video classification with convolutional neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1725-1732.
- 8. K. Simonyan and A. Zisserman, “Two-stream convolutional networks for action recognition in videos,” in Advances in Neural Information Processing Systems, 2014, pp. 568-576.
- 9. S. Yousfi, “Embedded Arabic text detection and recognition in videos,” Ph.D. dissertation, Lyon University, 2016.
- 10. C. A. Ronao and S.-B. Cho, “Human activity recognition with smartphone sensors using deep learning neural networks,” Expert Systems With Applications 59 (2016).
- 11. Y. Kwon, K. Kang, and C. Bae, “Unsupervised learning for human activity recognition using smartphone sensors,” Expert Systems with Applications, vol. 41, no. 14, pp. 6067-6074, 2014.
- 12. F. Massé, R. R. Gonzenbach, A. Arami, A. Paraschiv-Ionescu, A. R. Luft, and K. Aminian, “Improving activity recognition using a wearable barometric pressure sensor in mobility-impaired stroke patients,” Journal of NeuroEngineering and Rehabilitation, vol. 12, no. 1, p. 72, 2015.
- 13. E. Garcia-Ceja, R. F. Brena, J. C. Carrasco-Jimenez, and L. Garrido, “Long-term activity recognition from wristwatch accelerometer data,” Sensors, vol. 14, no. 12, pp. 22500-22524, 2014.
- 14. S. Dernbach, B. Das, N. C. Krishnan, B. L. Thomas, and D. J. Cook, “Simple and complex activity recognition through smart phones,” in Intelligent Environments (IE), 2012, pp. 214-221.
- 15. F. Chamroukhi, S. Mohammed, D. Trabelsi, L. Oukhellou, and Y. Amirat, “Joint segmentation of multivariate time series with hidden process regression for human activity recognition,” Neurocomputing, vol. 120, pp. 633-644, 2013.
- 16. A. Bayat, M. Pomplun, and D. A. Tran, “A study on human activity recognition using accelerometer data from smartphones,” Procedia Computer Science, vol. 34, pp. 450-457, 2014.
- 17. A. Moncada-Torres, K. Leuenberger, R. Gonzenbach, A. Luft, and R. Gassert, “Activity classification based on inertial and barometric pressure sensors at different anatomical locations,” Physiological Measurement, vol. 35, no. 7, 2014.
- 18. A. M. Khan, Y.-K. Lee, and T.-S. Kim, “Accelerometer signal-based human activity recognition using augmented autoregressive model coefficients and artificial neural nets,” in Engineering in Medicine and Biology Society, 2008. EMBS 2008. 30th Annual International Conference of the IEEE, 2008, pp. 5172-5175.
- 19. Y. Cai and X. Tan, “Weakly supervised human body detection under arbitrary poses,” in Image Processing (ICIP), 2016, pp. 599-603.
- 20. W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, “SSD: Single shot multibox detector,” in European Conference on Computer Vision. Springer, 2016, pp. 21-37.
- 21. I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016.
- 22. S. Ruder, “An overview of gradient descent optimization algorithms,” CoRR, vol. abs/1609.04747, 2016.
Remarks
1. Track 1: Artificial Intelligence and Applications
2. Technical Session: 14th International Symposium Advances in Artificial Intelligence and Applications
3. Record compiled with funds from the Ministry of Science and Higher Education (MNiSW), agreement No. 461252, under the program "Social Responsibility of Science" - module: Popularization of science and promotion of sport (2020).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-099ad7af-e271-4bda-bb9e-2bf11b3a68e4