Article title

Spanish Sign Language Interpreter for Mexican Linguistics

Authors
Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
We present the first visual interface for a Mexican Spanish Sign Language translator at its first development stage: sign-writing recognition. The software was developed for the unique characteristics of Mexican linguistics and is designed to take sentences, or sequences of signs, written in the sign-writing system; the program decodes them and converts them into a series of moving images corresponding to the Mexican sign language system. Using lexical, syntactic and semantic algorithms together with free software such as Java APIs, video converter software, and database managers like MySQL, Postgres and SQLite, it was possible to read and interpret the rich and complex Mexican language. Our visual interface application proved capable of reading and reconstructing each sentence given to the interpreter and translating it into a high-definition video. The average video display time versus the number of sentences to interpret proved to follow a linear relation, with an average time of two seconds per sentence. The software has overcome the problem of homonyms frequently used in the Spanish language and of the verb tense relation within each sentence; special symbols such as #, %, $, etc. are still not recognized by the software.
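The abstract outlines the pipeline only at a high level: sentences in the sign-writing system are decoded and mapped to sequences of sign videos, and display time grows linearly at roughly two seconds per sentence. The short Java sketch below merely illustrates that decoding-and-timing idea under assumed names (the SignWritingSketch class, the toy LEXICON map, and the clip paths are all illustrative, not from the article); the authors' actual implementation relied on Java APIs, video conversion software and database managers such as MySQL, Postgres and SQLite.

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class SignWritingSketch {

    // Hypothetical lexicon: sign-writing token -> video clip of the signed gesture.
    // Tokens and file paths are illustrative only.
    private static final Map<String, String> LEXICON = new LinkedHashMap<>();
    static {
        LEXICON.put("HOLA", "clips/hola.mp4");
        LEXICON.put("CASA", "clips/casa.mp4");
        LEXICON.put("COMER", "clips/comer.mp4");
    }

    // Decode one sign-writing sentence into the ordered clip sequence that
    // would later be concatenated into the output video.
    static List<String> decodeSentence(String sentence) {
        return List.of(sentence.trim().split("\\s+")).stream()
                .map(token -> LEXICON.getOrDefault(token.toUpperCase(), "clips/unknown.mp4"))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> sentences = List.of("hola casa", "comer casa");
        for (String s : sentences) {
            System.out.println(s + " -> " + decodeSentence(s));
        }
        // Linear scaling reported in the abstract: about two seconds per sentence.
        System.out.printf("Estimated display time: %d s%n", 2L * sentences.size());
    }
}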
Year
Pages
75-85
Physical description
Bibliography: 17 items, figures, tables
Contributors
author
  • University ITS Chapala at Mexico
Bibliography
  • 1. Aranda, B. E., 2008. La vulneración de los derechos humanos de las personas Sordas en México. Comisión Nacional de los Derechos Humanos, CNDH.
  • 2. R. Barra, R. Córdoba, L. F. Haro, F. Fernández, J. Ferreiros, J. M. Lucas, J. Macías-Guarasa, J. M. Montero and J. M. Pardo, Speech to sign language translation system for Spanish, Applied Soft Computing, 2008.
  • 3. Comparán, J. J., 1999. Lengua Española I. México: AMATE. Discapacidades, E. C., La sordera y la pérdida de la capacidad auditiva, http://www.sitiodesordos.com.ar/sordera.htm.
  • 4. Ding Lilia, Modelling and recognition of the linguistic components in American Sign Language, Applied Soft Computing, 2008, 421, 105.
  • 5. Dons, R., & Ortíz, C., 2005, XXXV Simposio Internacional de la SEL: http://www3.unileon.es/dp/dfh/SEL/actas.htm.
  • 6. J. Earley, An Efficient Context-Free Parsing Algorithm, PhD thesis, University of California, Berkeley, California, 1970, pp. 94-102, http://www-2.cs.cmu.edu/afs/cs.cmu.edu/project/cmt-55/lti/Courses/711/Class-notes/p94-earley.pdf.
  • 7. El Universal, 2006: http://www.eluniversal.com.mx/articulos/30484.html.
  • 8. Estrada, B., 2008, Sordos: www.sordos.org.mx/articulo.doc.
  • 9. Galicia, S. N., 2000, Instituto Politécnico Nacional, Centro de Investigación en Computación, Laboratorio de Lenguaje Natural, Análisis sintáctico: http://www.gelbukh.com/Tesis/Sofia/tesisfinal.htm.
  • 10. García, J. R., & Giner, B., 2007, Pearson, Prentice Hall.
  • 11. Leybon I. J., Ramírez B. M. R., Picazo T. V., Photo-Electric Sensor Applied to Hand Fingers Movement, Computación y Sistemas, 2006, 10, 556.
  • 12. Lodares, J. R., Aplicaciones Lexemáticas a la Enseñanza del Español, 2009, Clarín, Revista de Nueva Literatura, 78.
  • 13. J. M. Montero M., Desarrollo de un Entorno para el Análisis Sintáctico de una Lengua Natural, Universidad Politécnica de Madrid, España, 2004, http://lorien.die.upm.es/juancho/pfcs/JMMM/pfcjmmm.pdf.
  • 14. Nuño, R., 1998. Correlatos neurofisiológicos del lenguaje de señas en el niño sordo. Proyecto de Investigación.
  • 15. J. M. Pardo, J. Ferreiros, V. Sama, R. Barra-Chicote, J. M. Lucas, D. Sánchez and A. García, Spoken Spanish generation from sign language, 2009, Applied Soft Computing, 123.
  • 16. Sami M. Halawani, Arabic Sign Language Translation System on Mobile Devices, International Journal of Computer Science and Network Security, 2008, 8, 1.
  • 17. Suphattharachai Chomphan, Towards the Development of Speaker-Dependent and Speaker-Independent Hidden Markov Model-Based Thai Speech Synthesis, 2009, Journal of Computer Science, 5, 905.
Document type
YADDA identifier
bwmeta1.element.baztech-d98e6805-f6cf-4b05-bd00-2686ede34c9d