Article title

Estimation and tracking of fundamental, 2nd and 3rd harmonic frequencies for spectrogram normalization in speech recognition

Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
A stable and accurate estimation of the fundamental frequency (pitch, F0) is an important requirement in speech and music signal analysis, in tasks such as automatic speech recognition and the extraction of a target signal in a noisy environment. In this paper we propose a pitch-related spectrogram normalization scheme that improves the speaker independence of standard speech features. Since a very accurate estimate of the fundamental frequency is essential for this scheme, we develop a non-parametric recursive method for estimating F0 and its 2nd and 3rd harmonic frequencies under noisy conditions. The proposed method differs from typical Kalman and particle filter methods in that no particular sum-of-sinusoids model is assumed; instead, F0 and its lower harmonics are estimated by means of a novel likelihood function. Experiments under various noise levels show that the proposed method is more accurate than other conventional methods. The spectrogram normalization scheme maps the real harmonic structure onto a normalized one. Results obtained for voiced phonemes show an increase in the stability of the standard speech features: the average within-phoneme distance of the MFCC features for voiced phonemes can be decreased by several percent.
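For readers who want a concrete picture of the two ingredients mentioned in the abstract, the short Python sketch below pairs a baseline autocorrelation pitch estimate (in the spirit of [26], not the recursive likelihood-based tracker proposed in the paper) with a simple linear pitch-related frequency warping of a magnitude spectrum. The function names, the reference pitch f0_ref = 150 Hz, and the synthetic test frame are illustrative assumptions, not details taken from the article.

import numpy as np

def estimate_f0_autocorr(frame, fs, f0_min=60.0, f0_max=400.0):
    # Baseline pitch estimate: pick the autocorrelation peak inside the
    # admissible lag range [fs / f0_max, fs / f0_min].
    frame = frame - np.mean(frame)
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_lo = int(fs / f0_max)
    lag_hi = min(int(fs / f0_min), len(corr) - 1)
    lag = lag_lo + int(np.argmax(corr[lag_lo:lag_hi]))
    return fs / lag

def normalize_spectrum(mag_spec, fs, f0, f0_ref=150.0):
    # Illustrative linear frequency warping: the harmonic at k*f0 in the
    # input spectrum is moved to k*f0_ref in the output, so voiced frames
    # of different speakers share a common (normalized) harmonic grid.
    freqs = np.linspace(0.0, fs / 2.0, len(mag_spec))
    return np.interp(freqs * (f0 / f0_ref), freqs, mag_spec, left=0.0, right=0.0)

# Synthetic voiced-like frame: F0 = 120 Hz plus its 2nd and 3rd harmonics.
fs = 16000
t = np.arange(int(0.032 * fs)) / fs
frame = sum(a * np.sin(2 * np.pi * k * 120.0 * t)
            for k, a in [(1, 1.0), (2, 0.6), (3, 0.4)])
f0_hat = estimate_f0_autocorr(frame, fs)
spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
spec_norm = normalize_spectrum(spec, fs, f0_hat)
print("estimated F0: %.1f Hz" % f0_hat)

In this toy normalization the harmonic at k*f0 lands near k*f0_ref, so voiced frames from speakers with different pitch are brought onto a common harmonic grid before feature extraction; the article's own scheme builds on the accurately tracked F0 and its 2nd and 3rd harmonics described above.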
Year
Pages
71–81
Physical description
Bibliography: 38 items, figures, tables.
Authors
author
author
author
  • Signal Processing Lab., School of Integrated Design Engineering, Keio University, 3-14-1 Hiyoshi, Yokohama 223-8522, Japan, W.Kasprzak@elka.pw.edu.pl
Bibliography
  • [1] J. Benesty, M.M. Sondhi, and Y. Huang, Springer Handbook of Speech Processing, Springer, Berlin, 2008.
  • [2] G. Demenko, B. Möbius, and K. Klessa, “Implementation of Polish speech synthesis for the BOSS system”, Bull. Pol. Ac.: Tech. 58 (3), 371–376 (2010).
  • [3] M.M. Goodwin, “The STFT, sinusoidal models, and speech modification”, in: Springer Handbook of Speech Processing, pp. 229–258, Springer, Berlin, 2008.
  • [4] U. Glavitsch, “Speaker normalization with respect to F0: a perceptual approach”, in: TIK-Report No. 185, Eidgenössische Technische Hochschule Zürich, Zürich, 2003.
  • [5] D. O’Shaughnessy, “Formant estimation and tracking”, in: Springer Handbook of Speech Processing, pp. 213–227, Springer, Berlin, 2008.
  • [6] R.W. Schafer, “Homomorphic systems and cepstrum analysis of speech”, in: Springer Handbook of Speech Processing, pp. 161–180, Springer, Berlin, 2008.
  • [7] W.J. Hess, “Pitch and voicing determination”, in: Advances in Speech Signal Processing, eds. S. Furui and M.M. Sondhi, pp. 3–48, Marcel Dekker, Inc., New York, 1992.
  • [8] A. de Cheveigné and H. Kawahara, “Comparative evaluation of F0 estimation algorithms”, Proc. Eurospeech 1, 2451–2454 (2001).
  • [9] M. Unoki and T. Hosorogiya, “Estimation of fundamental frequency of reverberant speech by utilizing complex cepstrum analysis”, J. Signal Processing 12 (1), 31–44 (2008).
  • [10] H. Kawahara, H. Katayose, A. de Cheveigné, and R.D. Patterson, “Fixed point analysis of frequency to instantaneous frequency mapping for accurate estimation of F0 and periodicity”, Proc. Eurospeech 1999, 2781–2784 (1999).
  • [11] A. de Cheveigné and H. Kawahara, “YIN, a fundamental frequency estimator for speech and music”, J. Acoust. Soc. Am. 111 (4), 1917–1930 (2002).
  • [12] T. Miwa, Y. Tadokoro, and T. Saito, “The pitch estimation of different musical instruments sounds using comb filters for transcription”, IEICE Trans. D-2, J81-D-2 (9), 1965–1974 (1998).
  • [13] T. Nakatani and T. Irino, “Robust and accurate fundamental frequency estimation based on dominant harmonic components”, J. Acoust. Soc. Am., 116 (6), 3690–3700 (2004).
  • [14] Y. Ishimoto, M. Unoki, and M. Akagi, “A fundamental frequency estimation method for noisy speech based on instantaneous amplitude and frequency”, Proc. EuroSpeech 2001, 2439–2442 (2001).
  • [15] Y. Atake, T. Irino, H. Kawahara, J. Lu, S. Nakamura, and K. Shikano, “Robust estimation of fundamental frequency using instantaneous frequencies of harmonic components”, IEICE Trans. D-2, J83-D-2 (11), 2077–2086 (2000).
  • [16] C. Dubois and M. Davy, “Joint detection and tracking of time-varying harmonic components: a flexible Bayesian approach”, IEEE Trans. on Audio Speech and Language Processing 15 (4), 1283–1295 (2007).
  • [17] S. Kim, A.S. Paul, E.A. Wan, and J. McNames, “Multiharmonic tracking using sigma-point Kalman filter”, IEEE EMBC 8, CD-ROM (2008).
  • [18] K. Nishi, M. Abe, and S. Ando, “Multiple pitch tracking and harmonic segregation algorithm for auditory scene analysis”, The Society of Instrument and Control Engineers 34 (6), 483–490 (1988), (in Japanese).
  • [19] S. Hainsworth and M. Macleod, “Beat tracking with particle filtering algorithms”, Proc. WASPAA 1, 91–94 (2003).
  • [20] S. Tomoike and M. Akagi, “Estimation of local peaks based on particle filter in adverse environments”, J. Signal Processing 12 (4), 303–306 (2008).
  • [21] L. Lee and R. Rose, “A frequency warping approach to speaker normalization”, IEEE Trans. on Speech and Audio Processing 6 (1), 49–60 (1998).
  • [22] P. Dognin, “A bandpass transform for speaker normalization”, Ph.D. Dissertation, University of Pittsburgh, Pittsburgh, 2003.
  • [23] H. Traunmüller and F. Lacerda, “Perceptual relativity in identification of two-formant vowels”, Speech Communication 6, 143–157 (1987).
  • [24] E. Eide and H. Gish, “A parametric approach to vocal tract length normalization”, Proc. ICASSP 1, 346–348 (1996).
  • [25] J. Laroche and M. Dolson, “New phase-vocoder techniques for real-time pitch shifting, chorusing, harmonizing, and other exotic audio modifications”, J. Audio Eng. Soc. 47 (11), 928–936 (1999).
  • [26] L.R. Rabiner, “On the use of autocorrelation analysis for pitch detection”, IEEE Trans. on Acoustics, Speech, and Signal Processing ASSP-25 (1), 24–33 (1977).
  • [27] T. Shimamura and H. Kobayashi, “Weighted autocorrelation for pitch extraction of noisy speech”, IEEE Trans. on Speech and Audio Processing 9 (7), 727–730 (2001).
  • [28] G.S. Ying, L.H. Jamieson, and C.D. Mitchell, “A probabilistic approach to AMDF pitch detection”, J. Acoust. Soc. Am. 95 (5), 2817–2817 (1994).
  • [29] T. Miyamoto, H. Inada, and K. Nakata, “A real time PARCOR analysis of speech by high-performance signal processors”, IEICE J66-A (7), 625–632 (1983), (in Japanese).
  • [30] T. Sakai, T. Kitamura, and E. Hayahara, “Improvement of pitch extraction method in noisy environment based on cepstrum”, Electronics, Information, and Communication Engineers 1, 299 (1995).
  • [31] D.M. Howard, “Peak-picking fundamental period estimation for hearing prostheses”, J. Acoust. Soc. Am. 86 (3), 902–910 (1989).
  • [32] B. Ristic, S. Arulampalam, and N. Gordon, Beyond the Kalman Filter: Particle Filters for Tracking Applications, Artech House, Boston, 2004.
  • [33] Y. Medan, E. Yair, and D. Chazan, “Super resolution pitch determination of speech signals”, IEEE Trans. on Signal Processing 39 (1), CD-ROM (1991).
  • [34] P. Veprek and M.S. Scordilis, “Analysis, enhancement and evaluation of five pitch determination techniques”, Speech Comm. 37, 249–270 (2002).
  • [35] B. Adamczyk, K. Adamczyk, and K. Trawiński, “Robot’s vocabulary”, IAiR Bulletin 12, CD-ROM (2000), (in Polish).
  • [36] G.-N. Hu and D.-L. Wang, “Monaural speech segregation based on pitch tracking and amplitude modulation”, IEEE Trans. on Neural Networks 15 (5), 1135–1150 (2004).
  • [37] W. Kasprzak, N. Ding, and N. Hamada, “Relaxing the WDO assumption in blind extraction of speakers from speech mixtures”, J. Telecom. and Information Technology 4, 50–58 (2010).
  • [38] F.A. Okazaki and W. Kasprzak, “A two-step approach to blind deconvolution of speech and sound sources in the time domain”, Bull. Pol. Ac.: Tech. 53 (1), 49–55 (2005).
Document type
YADDA identifier
bwmeta1.element.baztech-article-BPG8-0071-0011