Article title

Voiceless Stop Consonant Modelling and Synthesis Framework Based on MISO Dynamic System

Abstract
A framework for modelling and synthesizing voiceless stop consonant phonemes is proposed, in which the phoneme is modelled separately in the low-frequency and high-frequency ranges. The phoneme signal is decomposed into a sum of simpler basic components and described as the output of a linear multiple-input single-output (MISO) system, where the impulse response of each channel is a third-order quasi-polynomial. Within this framework, the boundary between the two frequency ranges is determined; a new three-step algorithm for finding this boundary point is presented in this paper. The input of the low-frequency component is a single unit impulse, so the impulse response generates the whole component. The high-frequency component arises when the system is excited by semi-periodic impulses; the filter impulse response of this component spans a single period and decays after three periods. Application of the proposed modelling framework to voiceless stop consonant phonemes has shown that the quality of the model is sufficiently good.
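The two-band MISO scheme described in the abstract can be sketched numerically. The following is a minimal illustration only, not the authors' implementation: all coefficients, decay rates, frequencies, and the impulse spacing are hypothetical placeholder values chosen for demonstration.

```python
import numpy as np

def quasi_polynomial_ir(t, coeffs, decay, freq, phase=0.0):
    # Third-order quasi-polynomial impulse response:
    # (c0 + c1*t + c2*t^2 + c3*t^3) * exp(-decay*t) * cos(2*pi*freq*t + phase)
    poly = sum(c * t**k for k, c in enumerate(coeffs))
    return poly * np.exp(-decay * t) * np.cos(2.0 * np.pi * freq * t + phase)

def synthesize_stop(fs=44100, dur=0.05):
    t = np.arange(int(fs * dur)) / fs

    # Low-frequency component: the channel input is a single unit impulse,
    # so the channel output equals the impulse response itself.
    low = quasi_polynomial_ir(t, coeffs=[0.0, 1.0, -30.0, 200.0],
                              decay=120.0, freq=400.0)

    # High-frequency component: a semi-periodic impulse train convolved
    # with a short impulse response that decays within three periods
    # of the excitation.
    period = max(1, int(fs / 3000.0))       # hypothetical impulse spacing
    excitation = np.zeros_like(t)
    excitation[::period] = 1.0
    hf_ir = quasi_polynomial_ir(t[:3 * period], coeffs=[0.0, 1.0, 0.0, 0.0],
                                decay=2000.0, freq=5000.0)
    high = np.convolve(excitation, hf_ir)[:t.size]

    # MISO output: the single output is the sum of the channel outputs.
    return low + high
```

The sketch reflects only the structure stated in the abstract (unit-impulse input for the low band, semi-periodic excitation with a short-lived impulse response for the high band, summation into one output); the actual model parameters are estimated from recorded phonemes in the paper.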
Physical description
Bibliography: 43 items; figures, tables, charts.
  • Institute of Mathematics and Informatics, Vilnius University, 4 Akademijos Str., Vilnius LT-08663, Lithuania
  • Audio Acoustics Laboratory, Faculty of Electronics, Telecommunications and Informatics, Gdansk University of Technology, G. Narutowicza 11/12, 80-233 Gdansk, Poland
Prepared from MNiSW funds under agreement 812/P-DUN/2016 for activities popularizing science (2017 tasks).