Article title

Rapid Text Entry Using Mobile and Auxiliary Devices for People with Speech Disorders Communication

Publication languages
EN
Abstracts
EN
The article presents an information technology that supports human communication through a person's residual capabilities, realized by organizing text entry on mobile and auxiliary devices. The components of the proposed technology are described in detail: a method for entering text with a limited number of controls, and a method for predicting the words most likely to follow those already entered in the sentence. A generalized representation of the text entry process with an ambiguous virtual keyboard is given, together with the representation of the control signals used to select control elements. Approaches to finding the optimal distribution of the alphabet characters over different numbers of control signals are presented. The word prediction method is generalized and improved using a statistical language model with back-off, and an approach to building a training corpus of spoken Ukrainian is proposed.
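The two components named in the abstract, text entry on an ambiguous keyboard driven by a small number of control signals and word prediction with a back-off language model, can be illustrated with a minimal sketch. The letter grouping, toy corpus, and scoring below are illustrative assumptions only (the paper instead searches for an optimal character distribution and builds a spoken-Ukrainian corpus); this is not the authors' implementation.

# Minimal sketch: letters are grouped onto a few ambiguous "keys" (control
# signals); a bigram model with unigram back-off ranks the candidate words
# that match an ambiguous key sequence.
from collections import Counter, defaultdict

# Hypothetical grouping of the Latin alphabet onto 4 control signals
# (the paper optimizes this distribution; this grouping is arbitrary).
GROUPS = ["abcdefg", "hijklm", "nopqrst", "uvwxyz"]
KEY_OF = {ch: i for i, grp in enumerate(GROUPS) for ch in grp}

def key_sequence(word):
    # Map a word to the sequence of ambiguous key presses that produces it.
    return tuple(KEY_OF[ch] for ch in word.lower())

# Toy training corpus (the paper builds a spoken-Ukrainian corpus instead).
corpus = "we can send a text and they tend to send it fast".split()
unigrams = Counter(corpus)
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def score(word, prev_word, alpha=0.4):
    # Back-off: use the bigram estimate if the pair was seen,
    # otherwise fall back to a discounted unigram estimate.
    if prev_word in bigrams and word in bigrams[prev_word]:
        return bigrams[prev_word][word] / sum(bigrams[prev_word].values())
    return alpha * unigrams[word] / sum(unigrams.values())

def candidates(keys, prev_word):
    # All vocabulary words matching the ambiguous key sequence, best first.
    matches = [w for w in unigrams if key_sequence(w) == keys]
    return sorted(matches, key=lambda w: score(w, prev_word), reverse=True)

# "send" and "tend" share the same key sequence under this grouping;
# the bigram statistics after "can" rank "send" first.
print(candidates(key_sequence("send"), "can"))  # ['send', 'tend']

The same ranking step is what a predictive entry system would use to order the word list shown to the user, so the most probable continuation can be selected with a single control signal.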
Authors
  • Glushkov Institute of Cybernetics of NAS of Ukraine and Taras Shevchenko National University of Kyiv, Ukraine
  • National University of Khmelnytsky, Ukraine
  • National University of Khmelnytsky, Ukraine
  • Lublin University of Technology, Lublin, Poland
  • East Kazakhstan State Technical University named after D. Serikbayev, Ust-Kamenogorsk, Kazakhstan
  • Institute of Information and Computational Technologies CS MES RK, Almaty, Kazakhstan
Bibliography
  • [1] Augmentative and Alternative Communication (AAC), http://www.asha.org/public/speech/disorders/AAC/
  • [2] I. G. Kryvonos, I. V. Krak, O. V. Barmak, and A. I. Kulias, “Methods to Create Systems for the Analysis and Synthesis of Communicative Information,” Cybernetics and Systems Analysis, vol. 53, no. 6, pp. 847–856, 2017.
  • [3] I. V. Krak, I. G. Kryvonos, O. V. Barmak, and A. S. Ternov, “An Approach to the Determination of Efficient Features and Synthesis of an Optimal Band-Separating Classifier of Dactyl Elements of Sign Language,” Cybernetics and Systems Analysis, vol. 52, no. 2, pp. 173–180, 2016.
  • [4] Iu. G. Kryvonos, Iu. V. Krak, O. V. Barmak, and D. V. Shkilniuk, “Construction and identification of elements of sign communication,” Cybernetics and Systems Analysis, vol. 49, no. 2, pp. 163–172, 2013.
  • [5] Iu. V. Krak, O. V. Barmak, and S. O. Romanyshyn, “The method of generalized grammar structures for text to gestures computer-aided translation,” Cybernetics and Systems Analysis, vol. 50, no. 1, pp. 116–123, 2014.
  • [6] Iu. G. Kryvonos, Iu. V. Krak, O. V. Barmak, and R. O. Bagriy, “New tools of alternative communication for persons with verbal communication disorders,” Cybernetics and Systems Analysis, vol. 52, no. 5, pp. 665–673, 2016.
  • [7] A. M. Cook and J. M. Polgar, Cook and Hussey's Assistive Technologies: Principles and Practice, Elsevier, 2015, 592 p.
  • [8] P. Dowden, A. Cook, J. Reichle, D. Beukelman, and J. Light, Choosing effective selection techniques for beginning communicators, Implementing an augmentative communication system: exemplary strategies for beginning communicators, Baltimore, MD: Paul H. Brookes Publishing Co, 2002, pp. 395–429.
  • [9] M. Silfverberg, I. S. MacKenzie, and P. Korhonen, “Predicting text entry speed on mobile phones,” Proceedings of the ACM Conference on Human Factors in Computing Systems, 2000, pp. 9–16.
  • [10] Y. V. Krak, A. V. Barmak, R. A. Bagriy, and I. O. Stelya, “Text entry system for alternative speech communications,” Journal of Automation and Information Sciences, vol. 49, no. 1, pp. 65–75, 2017.
  • [11] D. Grover, M. King, and C. Kuschler, Reduced keyboard disambiguating computer, Patent No. US5818437, Tegic Communications, Inc., Seattle, 1998.
  • [12] D. Jurafsky, and J. H. Martin, Speech and Language Processing, 2nd edition, New Jersey, Prentice Hall Inc., 2015, p. 1024.
  • [13] Iu. G. Kryvonos, Iu. V. Krak, O. V. Barmak, and R. O. Bagriy, “Predictive text typing system for the Ukrainian language,” Cybernetics and Systems Analysis, vol. 53, no. 4, pp. 495–502, 2017.
  • [14] D. Kamińska and A. Pelikant, “Spontaneous emotion recognition from speech signal using multimodal classification,” Informatyka, Automatyka, Pomiary w Gospodarce i Ochronie Środowiska – IAPGOS, vol. 2, no. 3, pp. 36–39, 2012.
  • [15] S. V. Pavlov, V. P. Kozhemiako, P. F. Kolesnik, et al., Physical Principles of Biomedical Optics: monograph, Vinnytsya, VNTU, 2010, 152 p.
  • [16] E. Majda-Zdancewicz and A. Dobrowolski, “Text Independent Automatic Speaker Recognition System using fusion of features,” Przegląd Elektrotechniczny, vol. 91, no. 10, pp. 247–251, 2015.
  • [17] V. Vassilenko, S. Valtchev, J. P. Teixeira, and S. Pavlov, “Energy harvesting: an interesting topic for education programs in engineering specialities,” Internet, Education, Science, pp. 149–156, 2016.
Notes
Record created with funding from the Ministry of Science and Higher Education (MNiSW), agreement No. 461252, under the programme "Społeczna odpowiedzialność nauki" (Social Responsibility of Science), module: popularisation of science and promotion of sport (2020).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-95e97f5b-3336-4d19-8830-af01f575e1d5