Article title

Spectral Mapping Using Kernel Principal Components Regression for Voice Conversion

Authors
Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
The Gaussian mixture model (GMM) method is popular and efficient for voice conversion (VC), but it is often prone to overfitting. In this paper, principal component regression (PCR) is adopted for the spectral mapping between source and target speech, and the number of principal components is adjusted appropriately to prevent overfitting. Then, to better model the nonlinear relationship between the source and target speech, kernel principal component regression (KPCR) is proposed. Moreover, a method combining KPCR with GMM is further proposed to improve conversion accuracy. In addition, the discontinuity and oversmoothing problems of the traditional GMM method are addressed: to reduce discontinuity, an adaptive median filter is applied to smooth the posterior probabilities, and to reduce oversmoothing, the two mixture components with the highest posterior probabilities for each frame are selected for conversion. Finally, objective and subjective experiments are carried out, and the results demonstrate that the proposed approach performs considerably better than the GMM method: in the objective tests it achieves lower cepstral distances and higher identification rates, and in the subjective tests it obtains higher preference and perceptual-quality scores.
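The spectral-mapping step described in the abstract can be illustrated with a minimal sketch of kernel principal component regression: project time-aligned source frames onto a small number of kernel principal components and fit a linear regression from those components to the aligned target frames. The sketch below assumes paired source/target cepstral features, an RBF kernel, and the scikit-learn KernelPCA and LinearRegression classes; the kernel choice, the number of components, and the synthetic data are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal KPCR sketch for spectral mapping (illustrative assumptions,
# not the paper's exact setup: RBF kernel, 10 components, random data).
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X_src = rng.standard_normal((500, 24))   # hypothetical aligned source cepstral frames
Y_tgt = rng.standard_normal((500, 24))   # hypothetical aligned target cepstral frames

# 1. Project source frames onto a limited number of kernel principal
#    components; keeping only a few components is the regularization
#    that counters overfitting in (K)PCR.
kpca = KernelPCA(n_components=10, kernel="rbf", gamma=0.05)
Z = kpca.fit_transform(X_src)

# 2. Regress the target spectra on the kernel principal components.
reg = LinearRegression().fit(Z, Y_tgt)

# 3. Convert unseen source frames: kernel projection followed by the
#    learned linear mapping.
X_new = rng.standard_normal((10, 24))
Y_converted = reg.predict(kpca.transform(X_new))
print(Y_converted.shape)  # (10, 24)
```

The paper's further refinements, combining KPCR with GMM posteriors, smoothing the posteriors with an adaptive median filter, and keeping only the two most probable mixture components per frame, are not shown in this sketch.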
Year
Pages
39–45
Physical description
Bibliography: 16 items, tables, charts
Contributors
author
  • Key Laboratory of Underwater Acoustic Signal Processing of Ministry of Education, Southeast University, Nanjing 210096, P.R. China
author
  • Key Laboratory of Underwater Acoustic Signal Processing of Ministry of Education, Southeast University, Nanjing 210096, P.R. China
author
  • School of Communication Engineering, Nanjing Institute of Technology, Nanjing 211167, P.R. China
Bibliography
  • 1. Abe M., Nakamura S., Shikano K., Kuwabara H. (1988), Voice conversion through vector quantization, Proceedings of the 1988 International Conference on Acoustics, Speech, and Signal Processing, pp. 655–658, New York.
  • 2. Chen Y., Chu M., Chang E., Liu J., Liu R. (2003), Voice conversion with smoothed GMM and MAP adaptation, Proceedings of Eurospeech 2003, pp. 2413–2416, Geneva.
  • 3. Desai S., Black A. W., Yegnanarayana B., Prahallad K. (2010), Spectral mapping using artificial neural networks for voice conversion, IEEE Transactions on Audio, Speech, and Language Processing, 18, 5, 954–964.
  • 4. Helander E., Virtanen T., Nurminen J., Gabbouj M. (2010), Voice conversion using partial least squares regression, IEEE Transactions on Audio, Speech, and Language Processing, 18, 5, 912–921.
  • 5. Hwang H., Haddad R. A. (1995), Adaptive median filters: new algorithms and results, IEEE Transactions on Image Processing, 4, 4, 499–502.
  • 6. Jolliffe I. T. (1982), A note on the use of principal components in regression, Applied Statistics, 31, 3, 300–303.
  • 7. Kain A., Macon M. W. (1998), Spectral voice conversion for text-to-speech synthesis, Proceedings of the 1998 IEEE International Conference on Acoustics, Speech, and Signal Processing, pp. 285–288, Seattle.
  • 8. Mesbahi L., Barreaud V., Boeffard O. (2007), GMM-based speech transformation systems under data reduction, Proceedings of the 6th ISCA Workshop on Speech Synthesis, pp. 119–124, Bonn.
  • 9. Reynolds D. A., Quatieri T. F., Dunn R. B. (2000), Speaker verification using adapted Gaussian mixture models, Digital Signal Processing, 10, 1, 19–41.
  • 10. Song P., Bao Y. Q., Zhao L., Zou C. R. (2011), Voice conversion using support vector regression, Electronics Letters, 47, 18, 1045–1046.
  • 11. Song P., Jin Y., Zhao L., Zou C. R. (2012), Voice conversion based on hybrid SVR and GMM, Archives of Acoustics, 37, 2, 143–149.
  • 12. Scholkopf B., Smola A., Muller K. R. (1997), Kernel principal component analysis, Proceedings of the 7th International Conference on Artificial Neural Networks, pp. 583–588, Berlin.
  • 13. Stylianou Y., Cappe O., Moulines E. (1998), Continuous probabilistic transform for voice conversion, IEEE Transactions on Speech and Audio Processing, 6, 2, 131–142.
  • 14. Toda T., Saruwatari H., Shikano K. (2001), Voice conversion algorithm based on Gaussian mixture model with dynamic frequency warping of STRAIGHT spectrum, Proceedings of the 2001 International Conference on Acoustics, Speech, and Signal Processing, pp. 841–844, Salt Lake City.
  • 15. Toda T., Black A.W., Tokuda K. (2005), Spectral conversion based on maximum likelihood estimation considering global variance of converted parameter, Proceedings of the 2005 International Conference on Acoustics, Speech, and Signal Processing, pp. 9–12, Philadelphia.
  • 16. Valbret H., Moulines E., Tubach J. (1992), Voice transformation using PSOLA techniques, Speech Communication, 11, 2-3, 175–187.
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-242cc78a-027e-4f68-bd8a-da3704a947ea