Article title

Evaluation of Gesture Description Language in the role of touchless interface for virtual reality environment

Title variants
Ocena działania Języka Opisu Gestów w roli bezdotykowego interfejsu w środowisku wirtualnej rzeczywistości
Publication languages
EN
Abstracts
EN
The main novelty presented in this paper is the application and evaluation of our Gesture Description Language (GDL) classifier in the role of a touchless interface for a virtual reality (VR) environment. In our VR system the whole interaction is performed through the analysis of gestures and body movements (a so-called natural user interface). For our needs we adapted a semi-realistic VR system (a block engine). We tested different aspects of the proposed interface on a group of 26 persons of both sexes and a wide age range (from 5 to 40+ years). The results we obtained prove that GDL can be successfully applied in systems that require real-time action recognition, especially educational software games that aim at increasing students' motivation and engagement while they learn.
PL
In this article the authors present the application of a classifier called Gesture Description Language (GDL) in the role of a touchless interface for a virtual reality system. All interaction with the system presented in this work is performed through the recognition and analysis of the user's movement (a so-called natural user interface). The GDL classifier was tested in a semi-realistic virtual environment on a group of 26 persons of both sexes, aged from 5 to 40+ years. The obtained results prove that the proposed approach can be successfully used in applications employing virtual reality, in particular in educational games whose aim is to make the process of acquiring knowledge more attractive.
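GDL, as described in [12] and [39], is a rule-based classifier that evaluates conditions over a stream of skeleton frames delivered by a depth sensor. The listing below is only a minimal illustrative sketch of that general idea, not the authors' actual GDL scripts or syntax: a hypothetical "right hand raised" rule is checked over consecutive Kinect-style skeleton frames, and all joint names, thresholds and the frame format are assumptions made for this example.

# Illustrative sketch only (hypothetical names and thresholds, not the authors' GDL syntax):
# a rule-based check of a "right hand raised" gesture over consecutive skeleton frames.
from collections import deque

class HandRaisedRule:
    """Fires when the right hand stays above the head for several consecutive frames."""

    def __init__(self, min_frames=5, margin=0.05):
        self.min_frames = min_frames            # consecutive frames required
        self.margin = margin                    # required height difference in metres
        self.history = deque(maxlen=min_frames)

    def update(self, frame):
        # frame: dict mapping joint names to (x, y, z) positions from a depth sensor
        hand_y = frame["hand_right"][1]
        head_y = frame["head"][1]
        self.history.append(hand_y > head_y + self.margin)
        # The rule fires only when every frame in the recent window satisfies the condition.
        return len(self.history) == self.min_frames and all(self.history)

# Example usage with two synthetic frames (positions in metres).
rule = HandRaisedRule(min_frames=2)
frames = [
    {"head": (0.0, 1.60, 2.0), "hand_right": (0.3, 1.70, 2.0)},
    {"head": (0.0, 1.60, 2.0), "hand_right": (0.3, 1.75, 2.0)},
]
for f in frames:
    recognized = rule.update(f)
print("gesture recognized:", recognized)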
Year
Pages
57-66
Physical description
Bibliography: 42 items, figures, tables, charts
Authors
author
  • Pedagogical University of Krakow, Institute of Computer Science and Computer Methods, 2 Podchorazych Ave, 30-084 Krakow, Poland
author
  • AGH University of Science and Technology, Cryptography and Cognitive Informatics Research Group, 30 Mickiewicza Ave, 30-059 Krakow, Poland
  • Pedagogical University of Krakow, Institute of Special Needs Education, R. Ingardena 4, 30-060 Krakow, Poland
Bibliography
  • [1] Virtual Realities, Dagstuhl Seminar 2008, Editors: Guido Brunnett, Sabine Coquillart, Greg Welch, Springer Vienna, 2011, DOI 10.1007/978-3-211-99178-7
  • [2] Zhou N-N, Deng Y-L, Virtual Reality: A State-of-the-Art Survey, International Journal of Automation and Computing 6(4), November 2009, 319-325, DOI: 10.1007/s11633-009-0319-9
  • [3] Houliez C., Gamble E., Dwelling in Second Life? A phenomenological evaluation of online virtual worlds, Virtual Reality (2013) 17:263–278, DOI 10.1007/s10055-012-0218-1
  • [4] Zhao QP, A survey on virtual reality, Science in China Series F: Information Sciences, March 2009, Volume 52, Issue 3, pp 348-400
  • [5] García-Rojas A., Gutiérrez M., Thalmann D., Visual creation of inhabited 3D environments: An ontology-based approach, The Visual Computer, July 2008, Volume 24, Issue 7-9, pp 719-726
  • [6] Sobota B., Korečko Š., Hrozek F., On building an object-oriented parallel virtual reality system, Central European Journal of Computer Science, October 2012, Volume 2, Issue 3, pp 261-271
  • [7] Noguera J. M., Torres J. C., Interaction and visualization of 3D virtual environments on mobile devices, Personal and Ubiquitous Computing, October 2013, Volume 17, Issue 7, pp 1485-1486
  • [8] Piórkowski A., Jajesnica Ł., Szostek K., Creating 3D Web-Based Viewing Services for DICOM Images, Computer Networks, Communications in Computer and Information Science Volume 39, 2009, pp 218-224
  • [9] Kahn S., Reducing the gap between Augmented Reality and 3D modeling with real-time depth imaging, Virtual Reality (2013) 17:111–123, DOI 10.1007/s10055-011-0203-0
  • [10] Ingrassia T., Cappello F., VirDe: a new virtual reality design approach, International Journal on Interactive Design and Manufacturing (IJIDeM), February 2009, Volume 3, Issue 1, pp 1-11, DOI 10.1007/s12008-008-0056-2
  • [11] Hrozek F., Sobota B., Szabó C., Digital preservation of historical buildings using virtual reality technologies, Central European Journal of Computer Science, October 2012, Volume 2, Issue 3, pp 272-282
  • [12] Hachaj T., Ogiela M. R., Rule-based approach to recognizing human body poses and gestures in real time, Multimedia Systems, February 2014, Volume 20, Issue 1, pp 81-99, (DOI) 10.1007/s00530-013-0332-2
  • [13] Lee M. W., Lee J. M., Generation and Control of Game Virtual Environment, International Journal of Automation and Computing 04(1), January 2007, 25-29, DOI: 10.1007/s11633-007-0025-4
  • [14] Sherstyuk A., Jay C., Treskunov A., Impact of hand-assisted viewing on user performance and learning patterns in virtual environments, Vis Comput (2011) 27: 173–185, DOI 10.1007/s00371-010-0516-0
  • [15] Middleton K. K., Hamilton T., Tsai P-C, Middleton D. B., Falcone J. L., Hamad G., Improved nondominant hand performance on a laparoscopic virtual reality simulator after playing the Nintendo Wii, Surgical Endoscopy, November 2013, Volume 27, Issue 11, pp 4224-4231
  • [16] Ruppert G. C., Reis L. O., Amorim P. H., de Moraes T. F., da Silva J. V., Touchless gesture user interface for interactive image visualization in urological surgery, World J Urol. 2012 Oct;30(5), pp. 687-691, doi: 10.1007/s00345-012-0879-0, Epub 2012 May 12
  • [17] Hachaj T., Ogiela M. R., Piekarczyk M., Real-Time Recognition of Selected Karate Techniques Using GDL Approach, Image Processing and Communications Challenges 5, Advances in Intelligent Systems and Computing, Volume 233, 2014, pp 99-106
  • [18] Clark R. A., Pua Y-H, Fortin K., Ritchie C., Webster K. E., Denehy L., Bryant A. L., Validity of the Microsoft Kinect for assessment of postural control, Gait & Posture 36 (2012), pp. 372–377
  • [19] Chang Y-J., Chen S-F., Huang J-D., A Kinect-based system for physical rehabilitation: A pilot study for young adults with motor disabilities, Research in Developmental Disabilities 32 (2011) pp. 2566–2570
  • [20] Webster D., Celik O., Systematic review of Kinect applications in elderly care and stroke rehabilitation, Journal of NeuroEngineering and Rehabilitation, 2014 Jul 3;11:108, doi: 10.1186/1743-0003-11-108
  • [21] Mastorakis G., Makris D., Fall detection system using Kinect’s infrared sensor, Journal of Real-Time Image Processing, December 2014, Volume 9, Issue 4, pp 635-646, doi: 10.1007/s11554-012-0246-9
  • [22] Planinc R., Kampel M., Introducing the use of depth data for fall detection, Personal and Ubiquitous Computing, August 2013, Volume 17, Issue 6, pp 1063-1072 , doi: 10.1007/s00779-012-0552-z
  • [23] Stevenson Won A., Bailenson J. N., Stathatos S. C., Dai W., Automatically Detected Nonverbal Behavior Predicts Creativity in Collaborating Dyads, Journal of Nonverbal Behavior, September 2014, Volume 38, Issue 3, pp 389-408, doi: 10.1007/s10919-014-0186-0
  • [24] Dutta T., Evaluation of the Kinect™ sensor for 3-D kinematic measurement in the workplace, Applied Ergonomics 43 (2012) pp. 645-649, doi 10.1016/j.apergo.2011.09.011
  • [25] Kongsro J., Estimation of pig weight using a Microsoft Kinect prototype imaging system, Computers and Electronics in Agriculture, Volume 109, November 2014, Pages 32–35, doi:10.1016/j.compag.2014.08.008
  • [26] Galna B., Barry G., Jackson D., Mhiripiri D., Olivier P., Rochester L., Accuracy of the Microsoft Kinect sensor for measuring movement in people with Parkinson's disease, Gait & Posture, Volume 39, Issue 4, April 2014, Pages 1062–1068, doi:10.1016/j.gaitpost.2014.01.008
  • [27] Castro C., Kinect Remote-Controlled Vehicles, Chapter, Arduino and Kinect Projects, pp. 207-241, Apress, 2012, DOI 10.1007/978-1-4302-4168-3_9
  • [28] Ng L. X., Oon S. W., Ong S. K., Nee A. Y. C., GARDE: a gesture-based augmented reality design evaluation system, International Journal on Interactive Design and Manufacturing (IJIDeM), June 2011, Volume 5, Issue 2, pp 85-94
  • [29] Chan L. K. Y., Lau H. Y. K., MagicPad: the projection based 3D user interface, International Journal on Interactive Design and Manufacturing (IJIDeM), May 2012, Volume 6, Issue 2, pp 75-81
  • [30] Lee J. Y., Rhee G. W., Seo D. W., Hand gesture-based tangible interactions for manipulating virtual objects in a mixed reality environment, The International Journal of Advanced Manufacturing Technology, December 2010, Volume 51, Issue 9-12, pp 1069-1082
  • [31] Lee H., Billinghurst M., Woo W., Two-handed tangible interaction techniques for composing augmented blocks, Virtual Reality (2011) 15:133–146, DOI 10.1007/s10055-010-0163-9
  • [32] Hürst W., van Wezel C., Gesture-based interaction via finger tracking for mobile augmented reality, Multimed Tools Appl (2013) 62:233–258, DOI 10.1007/s11042-011-0983-y
  • [33] Riecke B. E., Sigurdarson S., Milne A. P., Moving through virtual reality without moving?, Cognitive Processing (2012) 13 (Suppl 1):S293–S297, DOI 10.1007/s10339-012-0491-7
  • [34] Kelly J. W., Donaldson L. S., Sjolund L. A., Freiberg J. B., More than just perception–action recalibration: Walking through a virtual environment causes rescaling of perceived space, Attention, Perception & Psychophysics, (2013) 75:1473–1485, DOI 10.3758/s13414-013-0503-4
  • [35] Official website of Voxel Game engine http://www.voxelgame.com/
  • [36] Szostek K., Piórkowski A., OpenGL in Multi-User Web-Based Applications, Innovations in Computing Sciences and Software Engineering, 2010, pp 379-383
  • [37] Ogiela M. R., Hachaj T., Natural User Interfaces in Medical Image Analysis, Springer Verlag, Cham Heidelberg, 2015, ISBN 978-3-319-07799-4
  • [38] Official website of GDL technology http://cci.up.krakow.pl/gdl/
  • [39] Hachaj T., Ogiela M. R., Full-body gestures and movements recognition: user descriptive and unsupervised learning approaches in GDL classifier, Proc. SPIE 9217, Applications of Digital Image Processing XXXVII, 921704 (September 23, 2014); doi:10.1117/12.2061171
  • [40] Quattoni A., Darrell T., Latent-Dynamic Discriminative Models for Continuous Gesture Recognition, Computer Vision and Pattern Recognition, 2007. CVPR '07. IEEE Conference on, IEEE, pp. 1-8 (2007)
  • [41] Bossard C., Kermarrec G., Buche C., Tisseau J., Transfer of learning in virtual environments: a new challenge?, Virtual Reality (2008) 12:151–161, DOI 10.1007/s10055-008-0093-y
  • [42] Richard E., Tijou A., Richard P., Ferrier J-L, Multi-modal virtual environments for education with haptic and olfactory feedback, Virtual Reality (2006) 10:207–225, DOI 10.1007/s10055-006-0040-8
Notes
Prepared with funds from the Ministry of Science and Higher Education (MNiSW) under agreement 812/P-DUN/2016 for science-dissemination activities (2017 tasks).
Document type
YADDA identifier
bwmeta1.element.baztech-8befa945-3205-4c5f-8791-0ce569b352ba