Article title

Automatic recognition of signed Polish expressions

Title variants
Human Language Technologies as a challenge for Computer Science and Linguistics (2; 21-23.04.2005; Poznań, Poland)
Abstract
The paper considers recognition of single sentences of the Polish Sign Language. We use a canonical stereo system that observes the signer from a frontal view. Feature vectors take into account information about the hand shape and orientation, as well as the 3D position of the hand with respect to the face. Recognition based on human skin detection and hidden Markov models (HMMs) is performed online. We focus on 35 sentences and a 101-word vocabulary that can be used at the doctor's and at the post office. Details of the solution and results of experiments with regular and parallel HMMs are given.
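The recognition scheme described above scores an observed feature sequence against one HMM per vocabulary item and picks the best-scoring model. The following is a minimal illustrative sketch of that maximum-likelihood HMM classification, not the paper's implementation: it uses toy discrete-observation models and parameters invented for the example, whereas the paper works with continuous hand-shape, orientation, and 3D-position features.

```python
# Illustrative sketch of HMM-based word classification (toy models,
# discrete observations); not the system described in the paper.
import math

def forward_log_likelihood(obs, pi, A, B):
    """Log P(obs | model) computed with the forward algorithm.
    pi: initial state probabilities, A: state transition matrix,
    B: emission probabilities, B[state][symbol]."""
    n = len(pi)
    # initialization with the first observation
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    # induction over the remaining observations
    for o in obs[1:]:
        alpha = [sum(alpha[sp] * A[sp][s] for sp in range(n)) * B[s][o]
                 for s in range(n)]
    return math.log(sum(alpha))

# Two hypothetical word models over a 2-symbol observation alphabet:
# (initial probs, transitions, emissions). word_a prefers symbol 0 early,
# word_b prefers symbol 1 early.
models = {
    "word_a": ([0.9, 0.1], [[0.7, 0.3], [0.0, 1.0]], [[0.9, 0.1], [0.2, 0.8]]),
    "word_b": ([0.9, 0.1], [[0.7, 0.3], [0.0, 1.0]], [[0.1, 0.9], [0.8, 0.2]]),
}

def recognize(obs, models):
    # maximum-likelihood classification: pick the best-scoring model
    return max(models, key=lambda name: forward_log_likelihood(obs, *models[name]))

print(recognize([0, 0, 0, 1, 1], models))  # prints "word_a"
```

In a real system the per-frame feature vectors are continuous, so emissions would be Gaussian mixtures and the models would be trained from labeled sequences (e.g. with Baum-Welch); the classification step, however, has the same shape as this sketch.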
Physical description
Bibliography: 17 items; figures, tables.
  • Computer and Control Engineering Chair, Rzeszow University of Technology, W. Pola 2, 35-959 Rzeszów, Poland
  • Computer and Control Engineering Chair, Rzeszow University of Technology, W. Pola 2, 35-959 Rzeszów, Poland
  • [1] B. Bauer, H. Hienz and K. F. Kraiss: Video-based sign language recognition using statistical methods. Proc. of the ICPR'00, Barcelona, (2000), 2463-2466.
  • [2] C. Charayaphan and A. E. Marble: Image processing system for interpreting motion in American Sign Language. J. Biomed. Eng., 14, (1992), 419-425.
  • [3] K. Grobel and M. Assan: Isolated sign language recognition using hidden Markov models. Proc. of the IEEE Int. Conf. on SMC, Orlando, (1997), 162-167.
  • [4] J. K. Hendzel: Dictionary of the Polish Sign Language. OFFER Press, Olsztyn, 3rd edition, 1997.
  • [5] T. Kapuściński and M. Wysocki: Hand skin colour identification in different colour spaces. Archives of Theor. and Appl. Informatics, 13(1), (2001), 53-68.
  • [6] T. Kapuściński and M. Wysocki: Recognition of isolated words of the Polish Sign Language. Proc. of the CORES'05, Computer Recognition Systems, Springer, Berlin, Heidelberg, New York, (2005), 697-704.
  • [7] S. C. W. Ong and S. Ranganath: Automatic sign language analysis: A survey and the future beyond lexical meaning. IEEE Trans. PAMI, 27(6), (2005), 873-891.
  • [8] V. I. Pavlovic, R. Sharma and T. S. Huang: Visual interpretation of hand gestures for human-computer interaction: A review. IEEE Trans. PAMI, 19(7), (1997), 677-693.
  • [9] L. R. Rabiner: A tutorial on hidden Markov models and selected applications in speech recognition. Proc. of the IEEE, 77(2), (1989), 257-286.
  • [10] R. Rosenfeld: Two decades of statistical language modeling: Where do we go from here? Proc. of the IEEE, 88(8), (2000), 1270-1278.
  • [11] T. Starner, J. Weaver and A. Pentland: Real-time American Sign Language recognition using desk and wearable computer based video. IEEE Trans. PAMI, 20(12), (1998), 1371-1375.
  • [12] N. Suszczańska, P. Szmal and J. Francik: Translating Polish texts into sign language in the TGT system. Proc. of the 20th IASTED Int. Multiconference Applied Informatics, Innsbruck, (2002), 282-287.
  • [13] B. Szczepankowski: Sign language in school. WSiP, Warszawa, 1988.
  • [14] S. Tamura and S. Kawasaki: Recognition of sign language motion images. Pattern Recognition, 21, (1988), 343-353.
  • [15] S. Theodoridis and K. Koutroumbas: Pattern Recognition. Acad. Press, London, 1999.
  • [16] C. Vogler and D. Metaxas: A framework for recognizing the simultaneous aspects of American Sign Language. Computer Vision and Image Understanding, 81, (2001), 358-384.
  • [17] S. Young et al.: The HTK Book. Microsoft Corporation, 2000.