Article title

Estimation of Hardware Requirements for Isolated Speech Recognition on an Embedded Systems

Publication languages
EN
Abstracts
EN
In recent years, speech recognition functionality has increasingly been added to embedded devices. Because of the limited resources of these devices, there is a need to assess whether a given speech recognition system is feasible within the available constraints, and to estimate how many resources the system needs. In this paper, an attempt has been made to define a technique for estimating hardware resource usage in the speech recognition task. To determine the relevant parameters and their dependencies, two systems were tested: the first used the Dynamic Time Warping pattern-matching technique, the second used Hidden Markov Models. For each case, the recognition rate, recognition time, vocabulary database size, and learning time were measured. The obtained results were used to fit linear and polynomial regression models, and an estimation algorithm was then developed on the basis of these models. Testing the proposed approach showed that even low-end mobile phones have sufficient hardware resources for the realisation of an isolated speech recognition system.
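The abstract names Dynamic Time Warping as the pattern-matching technique of the first tested system. The sketch below is only a minimal illustration of that technique, not the authors' implementation: the feature representation (short tuples standing in for e.g. MFCC frames), the Euclidean local cost, and the template dictionary are assumptions made for the example.

```python
# Minimal Dynamic Time Warping (DTW) sketch for isolated-word template matching.
# Feature vectors and templates below are illustrative assumptions, not data
# from the paper.
import math

def dtw_distance(seq_a, seq_b):
    """Return the DTW alignment cost between two sequences of feature vectors."""
    n, m = len(seq_a), len(seq_b)
    # cost[i][j] = minimal accumulated cost of aligning seq_a[:i] with seq_b[:j]
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Local distance between frames (Euclidean, assumed).
            d = math.dist(seq_a[i - 1], seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Usage: classify an utterance by its closest reference template.
templates = {"yes": [(0.1, 0.2), (0.3, 0.1)], "no": [(0.9, 0.8), (0.7, 0.6)]}
utterance = [(0.12, 0.18), (0.28, 0.11)]
print(min(templates, key=lambda word: dtw_distance(utterance, templates[word])))
```

The quadratic time and memory of the accumulated-cost matrix is one reason such a resource-estimation step matters on constrained embedded hardware.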
Authors
  • Faculty of Computer Science and Information Technology, West Pomeranian University of Technology, Żołnierska 49, 71-210 Szczecin, Poland, kklobucki@wi.zut.edu.pl
Bibliography
  • [1] Z.-H. Tan and B. Lindberg, Automatic Speech Recognition on Mobile Devices and over Communication Networks. Springer-Verlag, 2008, pp. 2–21.
  • [2] L. Rabiner and W. Schafer, Theory and Applications of Digital Speech Processing. Prentice-Hall, 2010, pp. 950–984.
  • [3] V. Amudha, B. Venkataramani, R. V. Kumar, and S. Ravishankar, “Software/Hardware Co-Design of HMM Based Isolated Digit Recognition System,” Journal of Computers, vol. 4, no. 3, pp. 154–159, 2009.
  • [4] S. Grassi, M. Ansorge, F. Pellandini, and P.-A. Farine, “Implementation of Automatic Speech Recognition for Low-Power Miniaturized Devices,” in Proceedings of 5th COST 276 Workshop on Information and Knowledge Management for Integrated Media Communication, Prague, Czech Republic, 2–3 October 2003, pp. 59–64.
  • [5] S. Jalali, Trends and Implications in Embedded Systems Development. TCS white paper, 2009.
  • [6] C. Levy, G. Linares, and J.-F. Bonastre, “GMM-Based Acoustic Modeling for Embedded Speech Recognition,” in INTERSPEECH 2006 – ICSLP, Ninth International Conference on Spoken Language Processing, Pittsburgh, PA, USA, 17–21 September 2006.
  • [7] A. Peinado and J. Segura, Speech Recognition Over Digital Channels: Robustness and Standards. John Wiley & Sons, Inc., 2006, pp. 8–77.
  • [8] P. Senin, “Dynamic time warping algorithm review,” University of Hawaii at Manoa, Tech. Rep., 2008.
  • [9] S. Young, D. Kershaw, J. Odell, D. Ollason, V. Valtchev, and P. Woodland, The HTK Book Version 3.4. Cambridge University Press, 2006.
  • [10] L. Rabiner and B.-H. Juang, Fundamentals of Speech Recognition. Prentice Hall, 1993, pp. 219–226.
  • [11] D. Larose, Data Mining Methods and Models. Wiley-IEEE Press, 2006, pp. 36–98.
YADDA identifier
bwmeta1.element.baztech-article-BWA0-0051-0010