Article title

Phonetic Segmentation using a Wavelet-based Speech Cepstral Features and Sparse Representation Classifier

Publication languages
EN
Abstracts
EN
Speech segmentation is the process of dividing a speech signal into distinct acoustic blocks, which may be words, syllables, or phonemes. Phonetic segmentation is the task of finding the exact boundaries of the individual phonemes that compose a given speech signal. This problem is crucial for many applications, e.g. automatic speech recognition (ASR). In this paper we propose a new model-based, text-independent phonetic segmentation method built on wavelet packet speech parametrization features and the sparse representation classifier (SRC). Experiments were performed on two datasets: an English one derived from the TIMIT corpus and an Arabic one derived from the Arabic Speech Corpus. Results show that the proposed wavelet packet decomposition features outperform MFCC features in the speech segmentation task, in terms of both F1-score and R-measure, on both datasets. Results also indicate that the SRC achieves a higher hit rate than the well-known k-Nearest Neighbors (k-NN) classifier on the TIMIT dataset.
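For context, the R-measure mentioned above is the R-value of Räsänen et al. [48], which combines the boundary hit rate HR and the over-segmentation percentage OS as R = 1 - (|r1| + |r2|)/200, where r1 = sqrt((100 - HR)^2 + OS^2) and r2 = (-OS + HR - 100)/sqrt(2). The record page does not spell out the method itself, so the following is only a minimal Python sketch of the abstract's two building blocks, not the authors' pipeline: wavelet packet log-energy features per frame via PyWavelets (the paper uses wavelet-based cepstral features; the Coiflet wavelet here is an assumption suggested by reference [54]), and a basic sparse representation classifier in the style of Wright et al. [18] with orthogonal matching pursuit as the sparse solver. The inputs train_feats, train_labels, and test_feat are hypothetical.

    # Minimal sketch, not the authors' exact pipeline (see hedges above).
    import numpy as np
    import pywt
    from sklearn.linear_model import orthogonal_mp

    def wp_features(frame, wavelet="coif5", level=3):
        # Log-energy of each terminal wavelet packet subband of one frame;
        # "coif5" (a Coiflet, cf. [54]) and level=3 are assumptions.
        wp = pywt.WaveletPacket(data=frame, wavelet=wavelet, maxlevel=level)
        nodes = wp.get_level(level, order="freq")   # 2**level subbands
        energies = np.array([np.sum(node.data ** 2) for node in nodes])
        return np.log(energies + 1e-12)             # avoid log(0)

    def src_predict(train_feats, train_labels, test_feat, n_nonzero=10):
        # SRC as in Wright et al. [18]: sparse-code the test feature over a
        # dictionary of l2-normalized training features, then assign the
        # class whose atoms yield the smallest reconstruction residual.
        D = train_feats.T / np.linalg.norm(train_feats, axis=1)
        x = orthogonal_mp(D, test_feat, n_nonzero_coefs=n_nonzero).ravel()
        classes = np.unique(train_labels)
        residuals = [np.linalg.norm(test_feat - D[:, train_labels == c] @ x[train_labels == c])
                     for c in classes]
        return classes[int(np.argmin(residuals))]

A boundary detector could then, for example, hypothesize a phone boundary wherever the predicted class of consecutive frames changes.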
Pages
12-22
Physical description
Bibliography: 54 items, figures, tables.
Authors
  • Higher Institute for Applied Science and Technology, HIAST, Damascus, Syria
  • Higher Institute for Applied Science and Technology, HIAST, Damascus, Syria
  • Higher Institute for Applied Science and Technology, HIAST, Damascus, Syria
Bibliography
  • [1] J. Glass, „A probabilistic framework for segment-based speech recognition", Computer Speech & Language, vol. 17, no. 2-3, pp. 137-152, 2003 (DOI: 10.1016/S0885-2308(03)00006-8).
  • [2] D. T. Chappell and J. Hansen, „A comparison of spectral smoothing methods for segment concatenation based speech synthesis", Speech Commun., vol. 36, no. 3-4, pp. 343-373, 2002 (DOI: 10.1016/S0167-6393(01)00008-5).
  • [3] J. Adell and A. Bonafonte, „Towards phone segmentation for concatenative speech synthesis", in Proc. of the 5th ISCA Speech Synthesis Workshop (SSW5), Pittsburgh, PA, USA, 2004, pp. 139-144 [Online]. Available: https://nlp.lsi.upc.edu/papers/adell04b.pdf
  • [4] H. Wang, T. Lee, C. Leung, B. Ma, and H. Li, „Acoustic Segment Modeling with Spectral Clustering Methods", IEEE/ACM Transac. on Audio, Speech, and Language Process., vol. 23, no. 2, pp. 264-277, 2015 (DOI: 10.1109/TASLP.2014.2387382).
  • [5] J. P. Hosom, „Speaker-independent phoneme alignment using transition-dependent states", Speech Commun., vol. 51, no. 4, pp. 352-368, 2008 (DOI: 10.1016/j.specom.2008.11.003).
  • [6] J. P. van Hemert, „Automatic segmentation of speech", IEEE Transac. on Signal Process., vol. 39, no. 4, pp. 1008-1012, 1991 (DOI: 10.1109/78.80941).
  • [7] A. Ljolje, J. Hirschberg, and J. P. H. van Santen, „Automatic speech segmentation for concatenative inventory selection", in Progress In Speech Synthesis, J. P. H. van Santen, R. W. Sproat, J. P. Olive, and J. Hirschberg, Eds., New York: Springer, 1997, pp. 304-311 (DOI: 10.1007/978-1-4612-1894-4_24).
  • [8] B. L. Pellom and J. H. L. Hansen, „Automatic segmentation of speech recorded in unknown noisy channel characteristics", Speech Commun., vol. 25, no. 1-3, pp. 97-116, 1998 (DOI: 10.1016/S0167-6393(98)00031-4).
  • [9] G. A. Esposito, „Text independent methods for speech segmentation", Nonlinear Speech Model. and App., Lecture Notes in Computer Sci., G. Chollet, A. Esposito, M. Faundez-Zanuy, M. Marinaro, Eds., Berlin, Heidelberg: Springer, 2005, vol. 3445 (DOI: 10.1007/11520153_12).
  • [10] V. Khanagha, K. Daoudi, O. Pont, and H. Yahia, „Phonetic segmentation of speech signal using local singularity analysis", Digital Signal Processing, vol. 35, no. C, pp. 86-94, 2014 (DOI: 10.1016/j.dsp.2014.08.002).
  • [11] D. T. Toledano, L. A. H. Gomez, and L. V. Grande, „Automatic phonetic segmentation", IEEE Transac. on Speech and Audio Process., vol. 11, no. 6, pp. 617-625, 2003 (DOI: 10.1109/TSA.2003.813579).
  • [12] O. Scharenborg, V. Wan, and M. Ernestus, „Unsupervised speech segmentation: An analysis of the hypothesized phone boundaries", J. of Acoustical Society of America, vol. 127, no. 2, pp. 1084-1095, 2010 (DOI: 10.1121/1.3277194).
  • [13] B. D. Sarma and S. R. Mahadeva Prasanna, „Acoustic-Phonetic Analysis for Speech Recognition: A Review", IETE Technical Review, vol. 35, no. 3, pp. 305-327, 2017 (DOI: 10.1080/02564602.2017.1293570).
  • [14] M. Ziółko, J. Gałka, B. Ziółko, T. Drwięga, „Perceptual wavelet decomposition for speech segmentation", in Proc. of the Interspeech, 11th Annual Conf. of the Int. Speech Commun. Association, Makuhari, Chiba, Japan, 2010, pp. 2234-2237 (DOI: 10.21437/Interspeech.2010-614).
  • [15] D.-T. Hoang and H.-C. Wang, „Blind phone segmentation based on spectral change detection using Legendre polynomial approximation", The J. of the Acoustical Society of America, vol. 137, no. 2, pp. 797-805, 2015 (DOI: 10.1121/1.4906147).
  • [16] Ö. Batur Dinler and N. Aydin, „An optimal feature parameter set based on gated recurrent unit recurrent neural networks for speech segment detection", Appl. Sci., vol. 10, pp. 1273, 2020 (DOI: 10.3390/app10041273).
  • [17] F. Kreuk, J. Keshet, and Y. Adi, „Self-supervised contrastive learning for unsupervised phoneme segmentation", Proc. of the Interspeech, pp. 3700-3704, 2020 (DOI: 10.21437/Interspeech.2020-2398).
  • [18] J. Wright, A. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, „Robust face recognition via sparse representation", IEEE Transac. on Pattern Anal. and Mach. Intell., vol. 31, no. 2, pp. 210-227, 2009 (DOI: 10.1109/TPAMI.2008.79).
  • [19] X. Gu, C. Zhang, and T. Ni, „A hierarchical discriminative sparse representation classifier for EEG signal detection", IEEE/ACM Transac. on Comput. Biol. and Bioinform., vol. 18, no. 5, 2020 (DOI: 10.1109/TCBB.2020.3006699).
  • [20] Mh. Hajigholam, A. A. Raie, K. Faez, „Using sparse representation classifier (SRC) to calculate dynamic coefficients for multitask joint spatial pyramid matching", Iranian J. of Sci. and Technol., Trans. Electr. Eng., vol. 45, pp. 295-307, 2021 (DOI: 10.1007/s40998-020-00351-3).
  • [21] T. N. Sainath, A. Carmi, D. Kanevsky, and B. Ramabhadran, „Bayesian compressive sensing for phonetic classification", Proc. of the IEEE Int. Conf. on Acoustics Speech and Signal Process. (ICASSP), Dallas, TX, USA, 2010, pp. 4370-4373 (DOI: 10.1109/ICASSP.2010.5495638).
  • [22] T. N. Sainath and D. Kanevsky, „Sparse representations for speech recognition", in Compressed Sensing & Sparse Filtering, A. Carmi, L. Mihaylova, and S. Godsill, Eds., Berlin, Heidelberg: Springer, 2014, pp. 455-502 (DOI: 10.1007/978-3-642-38398-4_15).
  • [23] G. S. V. S. Sivaram, S. K. Nemala, M. Elhilali, T. D. Tran, and H. Hermansky, „Sparse coding for speech recognition", IEEE Int. Conf. on Acoustics Speech and Signal Process. (ICASSP), Dallas, TX, USA, 2010, pp. 4346-4349 (DOI: 10.1109/ICASSP.2010.5495649).
  • [24] A. Bhowmick, M. Chandra, and A. Biswas, „Speech enhancement using Teager energy operated ERB-like perceptual wavelet packet decomposition", Int. J. Speech Technol., vol. 20, pp. 813-827, 2017 (DOI: 10.1007/s10772-017-9448-7).
  • [25] P. K. Sahu, A. Biswas, A. Bhowmick, and M. Chandra, „Auditory ERB like admissible wavelet packet features for TIMIT phoneme recognition", Engineer. Sci. and Technol., an Int. J. (Elsevier), vol. 17, no. 3, pp. 145-151, 2014 (DOI: 10.1016/j.jestch.2014.04.004).
  • [26] H. Frihia and Ha. Bahi, „HMM/SVM segmentation and labelling of Arabic speech for speech recognition applications", Int. J. of Speech Technol., vol. 20, no. 3, pp. 563-573, 2017 (DOI: 10.1007/s10772-017-9427-z).
  • [27] M. Javed, M. M. A. Baig, and S. A. Qazi, „Unsupervised phonetic segmentation of classical Arabic speech using forward and inverse characteristics of the vocal tract", Arab. J. Sci. Eng., vol. 45, pp. 1581-1597, 2020 (DOI: 10.1007/s13369-019-04065-5).
  • [28] S. Dusan and L. Rabiner, „On the relation between maximum spectra transition positions and phone boundaries", Proc. of INTERSPEECH/ICSLP, Pittsburgh, PA, USA, 2006, pp. 645-648 [Online]. Available: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.569.3209&rep=rep1&type=pdf
  • [29] P. B. Ramteke and S. G. Koolagudi, „Phoneme boundary detection from speech: a rule based approach", Speech Commun., vol. 107, no. 3, pp. 1-17, 2019 (DOI: 10.1016/j.specom.2019.01.003).
  • [30] G. Almpanidis, „Phonemic segmentation using the generalised Gamma distribution and small sample Bayesian information criterion", Speech Commun., vol. 50, no. 1, pp. 38-55, 2008 (DOI: 10.1016/j.specom.2007.06.005).
  • [31] P. Teng, X. Liu, and Y. Jia, „Text-independent phoneme segmentation via learning critical acoustic change points", Intell. Sci. and Big Data Engineer., Lecture Notes in Computer Sci., Berlin, Heidelberg: Springer, 2013, vol. 8261 (DOI: 10.1007/978-3-642-42057-3_8).
  • [32] A. H. Abo Absa, M. Deriche, M. Elshafei-Ahmed, Y. M. Elhadj, and B. Juang, „A hybrid unsupervised segmentation algorithm for Arabic speech using feature fusion and a genetic algorithm", IEEE Access, vol. 6, pp. 43157-43169, 2018 (DOI: 10.1109/ACCESS.2018.2859631).
  • [33] Y.-H. Wang, Ch.-T. Chung, and H.-Y. Lee, „Gate activation signal analysis for gated recurrent neural networks and its correlation with phoneme boundaries", Interspeech, Stockholm, Sweden, 2017 (DOI: 10.21437/INTERSPEECH.2017-877).
  • [34] F. Kreuk, Y. Sheena, J. Keshet, and Y. Adi, „Phoneme boundary detection using learnable segmental features", IEEE Int. Conf. On Acoustics, Speech and Signal Process. (ICASSP 2020), Barcelona, Spain, 2020, pp. 8089-8093 (DOI: 10.1109/ICASSP40776.2020.9053053).
  • [35] J. Franke, M. Mueller, F. Hamlaoui, S. Stueker, and A. Waibel, „Phoneme boundary detection using deep bidirectional LSTMs", Proc. of the Speech Commun.: 12. ITG Symp., Paderborn, Germany, 2016, pp. 1-5 (ISBN: 9783800742752).
  • [36] L. Lu, L. Kong, Ch. Dyer, N. A. Smith, and S. Renals, „Segmental recurrent neural networks for end-to-end speech recognition", Proc. of the Interspeech, 2016, pp. 385-389 (DOI: 10.21437/Interspeech.2016-40).
  • [37] Y. H. Lee, J. Y. Yang, C. Cho, and H. Jung, „Phoneme segmentation using deep learning for speech synthesis", Proc. of the Conf. On Res. in Adaptive and Convergent Systems, Honolulu, HI, USA, 2018, pp. 59-61 (DOI: 10.1145/3264746.3264801).
  • [38] J. F. Gemmeke, T. Virtanen, and A. Hurmalainen, „Exemplar-based sparse representations for noise robust automatic speech recognition", IEEE Transac. on Audio, Speech, and Language Process., vol. 19, no. 7, 2011, pp. 2067-2080 (DOI: 10.1109/TASL.2011.2112350).
  • [39] D. Baby, T. Virtanen, J. F. Gemmeke, and H. Van Hamme, „Coupled dictionaries for exemplar-based speech enhancement and automatic speech recognition", IEEE/ACM Transac. on Audio, Speech, and Language Process., vol. 23, no. 11, pp. 1788-1799, 2015 (DOI: 10.1109/TASLP.2015.2450491).
  • [40] V.-H. Duong, M.-Q. Bui, and J.-Ch. Wang, „Dictionary learning-based speech enhancement", in Active Learning - Beyond the Future, IntechOpen, 2019 (DOI: 10.5772/intechopen.85308).
  • [41] M. Hasheminejad and H. Farsi, „Frame level sparse representation classification for speaker verification", Multimedia Tools and App., vol. 76, pp. 21211-21224, 2017 (DOI: 10.1007/s11042-016-4071-1).
  • [42] Z. Zhang, Y. Xu, J. Yang, X. Li, and D. Zhang, „A Survey of sparse representation: algorithms and applications", IEEE Access, vol. 3, pp. 490-530, 2015 (DOI: 10.1109/ACCESS.2015.2430359).
  • [43] B. K. Natarajan, „Sparse approximate solutions to linear systems", SIAM J. on Comput., vol. 24, no. 2, pp. 227-234, 1995 (DOI: 10.1137/S0097539792240406).
  • [44] S. S. Chen, D. L. Donoho, and M. A. Saunders, „Atomic decomposition by basis pursuit", SIAM J. on Scientific Comput., vol. 20, no. 1, pp. 33-61, 1999 (DOI: 10.1137/S1064827596304010).
  • [45] J. A. Tropp and S. J. Wright, „Computational methods for sparse solution of linear inverse problems", Proc. of the IEEE, vol. 98, no. 6, 2010, pp. 948-958 (DOI: 10.1109/JPROC.2010.2044010).
  • [46] SPGL1: A solver for large-scale sparse reconstruction [Online]. Available: https://www.cs.ubc.ca/~mpf/spgl1/index.html
  • [47] Matlab Benchmark Scripts, L-1 Benchmark Package [Online]. Available: http://people.eecs.berkeley.edu/~yang/software/l1benchmark/l1benchmark.zip
  • [48] O. J. Räsänen, U. K. Laine, and T. Altosaar, „An improved speech segmentation quality measure: the R-value", Interspeech, 10th Annual Conf. of the Int. Speech Commun. Association, Brighton, United Kingdom, pp. 1851-1854, 2009.
  • [49] J. S. Garofolo et al., „TIMIT acoustic-phonetic continuous speech corpus", 1993 (DOI: 10.35111/17gk-bn40).
  • [50] Arabic Speech Corpus: a single-speaker, Modern Standard Arabic speech corpus made for high quality speech synthesis [Online]. Available: http://www.arabicspeechcorpus.com
  • [51] C. Lopes and F. Perdigao, „Phoneme recognition on the TIMIT database, speech technologies", IntechOpen, 2011 (DOI: 10.5772/17600).
  • [52] N. Halabi, „Modern standard Arabic phonetics for speech synthesis", University of Southampton, Electronics & Computer Sci., Ph.D. Thesis, 2016 [Online]. Available: https://eprints.soton.ac.uk/409695/1/Nawar%20Halabi%20PhD%20Thesis%20Revised.pdf
  • [53] Praat: doing phonetics by computer [Online]. Available: https://www.fon.hum.uva.nl/praat/
  • [54] C. S. Burrus and J. E. Odegard, „Coiflet systems and zero moments", IEEE Transac. on Signal Process., vol. 46, no. 3, pp. 761, 1998 (DOI: 10.1109/78.661342).
Document type
YADDA identifier
bwmeta1.element.baztech-5b23684a-8b78-4305-adfe-0f223edf6e5e