Given the continued growth of human needs and the constant improvement of technology, it is desirable to develop techniques that enhance communication between computers and humans in the most intuitive ways possible. The ability to automatically recognize human gestures using artificial vision (among other kinds of sensors) opens up a whole range of applications for controlling and interacting with environments. Most current approaches to sensor-based gesture recognition converge on the use of vision, myography, and motion-sensing devices applied to robotic, medical, and industrial settings. In this work, we study the principles of combining vision and body-contact sensing for the automatic classification of a set of human gestures. To this end, two different approaches have been evaluated: feed-forward neural networks and hidden Markov models. These models have been studied and implemented to recognize up to eight different human hand gestures commonly used in collaborative robotics tasks.
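As a rough illustration of the feed-forward approach mentioned above, the sketch below shows a minimal one-hidden-layer classifier over eight gesture classes. The feature dimension, layer sizes, and randomly initialized weights are all illustrative assumptions, not the trained model from this work.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 16   # assumed length of a vision/contact feature vector
N_HIDDEN = 32     # illustrative hidden-layer width
N_GESTURES = 8    # the gesture set classified in this work

# Random weights stand in for a trained network (training not shown).
W1 = rng.normal(scale=0.1, size=(N_FEATURES, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(scale=0.1, size=(N_HIDDEN, N_GESTURES))
b2 = np.zeros(N_GESTURES)

def predict(x: np.ndarray) -> int:
    """Forward pass: one hidden layer with ReLU, softmax output."""
    h = np.maximum(0.0, x @ W1 + b1)       # hidden activations
    logits = h @ W2 + b2
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    return int(np.argmax(probs))           # index of the predicted gesture

sample = rng.normal(size=N_FEATURES)       # a hypothetical feature vector
label = predict(sample)
```

A trained system would replace the random weights with parameters fit on labeled gesture data; the forward-pass structure stays the same.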