Article title

Hybridisation of Mel Frequency Cepstral Coefficient and Higher Order Spectral Features for Musical Instruments Classification

Publication languages
EN
Abstract
EN
This paper presents the classification of musical instruments using Mel Frequency Cepstral Coefficients (MFCC) and Higher Order Spectral features. MFCC, cepstral, temporal, spectral, and timbral features have been widely used in the task of musical instrument classification. As musical sound signals are generated by non-linear dynamics, the non-linearity and non-Gaussianity of musical instruments are important features which have not been considered in the past. In this paper, a hybridisation of MFCC and Higher Order Spectral (HOS) based features has been used for musical instrument classification. The HOS-based features provide instrument-specific information such as the non-Gaussianity and non-linearity of the musical instruments. The extracted features have been presented to a Counter Propagation Neural Network (CPNN) to identify the instruments and their family. For experimentation, isolated sounds of 19 musical instruments from the McGill University Master Samples (MUMS) sound database have been used. The proposed features show a significant improvement in the classification accuracy of the system.
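To make the abstract's HOS terminology concrete, below is a minimal NumPy sketch of the kind of features it refers to: normalized skewness and excess kurtosis (third- and fourth-order non-Gaussianity measures) plus a segment-averaged bispectrum magnitude. The function name `hos_features` and this particular choice of statistics are illustrative assumptions, not the paper's exact feature set.

```python
import numpy as np

def hos_features(x, nfft=256):
    """Illustrative HOS-style features (NOT the paper's exact set):
    normalized skewness, excess kurtosis, and the mean magnitude of a
    bispectrum estimated by the direct FFT method with segment averaging."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    std = x.std()
    skew = np.mean(x**3) / std**3          # third-order non-Gaussianity
    kurt = np.mean(x**4) / std**4 - 3.0    # fourth-order non-Gaussianity

    # Bispectrum estimate B(f1, f2) = E[X(f1) X(f2) conj(X(f1 + f2))],
    # averaged over non-overlapping Hann-windowed segments.
    half = nfft // 2
    f1, f2 = np.meshgrid(np.arange(half), np.arange(half))
    win = np.hanning(nfft)
    nseg = len(x) // nfft
    B = np.zeros((half, half), dtype=complex)
    for k in range(nseg):
        X = np.fft.fft(x[k * nfft:(k + 1) * nfft] * win)
        B += X[f1] * X[f2] * np.conj(X[f1 + f2])
    B /= max(nseg, 1)
    return np.array([skew, kurt, np.abs(B).mean()])
```

A quadratically phase-coupled signal (two harmonics plus a third at the sum frequency with the sum phase) has clearly nonzero skewness, while Gaussian noise scores near zero, which is why such statistics can separate sounds produced by non-linear dynamics from linear-Gaussian ones.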
Pages
427–436
Physical description
Bibliography: 32 items, figures, tables, charts.
Authors
author
  • National Institute of Technology, Warangal, India
  • National Institute of Technology, Warangal, India
author
  • Rajarshi Shahu College of Engineering, Tathawade, Pune, India
References
  • 1. Agostini G., Longari M., Pollastri E. (2001), Content-Based Classification of Musical Instrument Timbres, International Workshop on Content Based Multimedia Indexing.
  • 2. Agostini G., Longari M., Pollastri E. (2003), Musical instrument timbre classification with spectral features, EURASIP J. Appl. Signal Process., 1, 5–14.
  • 3. Ajmera P. K., Nehe N. S., Jadhav D. V., Holambe R. S. (2012), Robust feature extraction from spectrum estimated using Bispectrum for speaker recognition, Int. Journal of Speech Technology, 15, 433–440.
  • 4. Bhalke D. G., Rama Rao C. B., Bormane D. S. (2014), Musical Instrument Classification using Higher Order Spectra, International Conference on Signal Processing and Integrated Networks (SPIN-2014), 20–21 Feb., 2014.
  • 5. Bhalke D. G., Rama Rao C. B., Bormane D. S. (2015), Automatic Musical Instrument classification using Fractional Fourier Transform based-MFCC Features and Counter Propagation Neural Network, Journal of Intelligent Information Systems, Springer, DOI: 10.1007/s10844-015-0360-9.
  • 6. Bordoloi S., Sharmah U., Hazarika S. M. (2012), Classification of Motor imagery based on Hybrid features of Bispectrum of EEG, IEEE International Conference on Communications, Devices and Intelligent Systems (CODIS), pp. 123–116.
  • 7. Byun H., Lee S. W. (2002), Applications of support vector machines for pattern recognition, [in:] Proc. Of the International Workshop on Pattern Recognition with Support Vector Machine, pp. 213–236.
  • 8. Choudhury M. A. S. S., Shah S. L., Thornhill N. F. (2002), Detection and diagnosis of System Nonlinearities using higher order statistics, 15th Triennial World Congress, Barcelona, Spain, pp. 1–6.
  • 9. Deng J. D., Simmermacher C., Cranefield S. (2008), A study on feature analysis for Musical Instrument Classification, IEEE Transaction on Systems, Man and Cybernetics, 38, 2, 429–438.
  • 10. Dubnov S., Tishby N. (1994), Spectral Estimation using Higher Order Statistics, Proceedings of the 12th International Conference on Pattern Recognition, Jerusalem, Israel.
  • 11. Dubnov S., Tishby N. (1997), Analysis of sound textures in musical and machine sounds by means of higher order statistical features, International Conference on Acoustics, Speech, and Signal Processing, 5, 3845–3848.
  • 12. Dubnov S., Tishby N. (1998), Testing for Gaussianity and Non Linearity in the sustained portion of musical sounds, Recherches et Applications en Informatique Musicale, M. Chemillier, F. Pachet [Eds.], Editions HERMES, pp. 212–224.
  • 13. Dubnov S., Rodet X. (2003), Investigation of phase coupling phenomena in sustained portion of musical instruments sound, J. Acoust. Soc. Am., 113, 1, 348–359.
  • 14. Eronen A. (2001), Comparison of features for Musical instrument recognition, [in:] Proceedings of the IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, pp. 19–22.
  • 15. Eronen A., Klapuri A. (2000), Musical Instrument Recognition using cepstral coefficients and temporal features, [in:] Proc. of IEEE International Conference on Acoustics, Speech and Signal Processing, 2, 753–756.
  • 16. Essid S., Richard G., David B. (2006), Hierarchical Classification of Musical Instruments on Solo Recordings, [in:] Proc. of IEEE International Conference on Acoustics, Speech and Signal Processing, 5, 14–19.
  • 17. Goppert J., Rosenstiel W. (1993), Self-organizing maps vs. back-propagation: An experimental study, Proc. of Work. Design Methodol. Microelectron. Signal Process., pp. 153–162.
  • 18. Goshvarpour A., Rahati S., Saadatian V. (2012), Bispectrum Estimation of Electroencephalogram Signal During Meditation, Iran J. Psychiatry Behav. Sci., 6, 2.
  • 19. Kaminskyj I., Czaszejko T. (2005), Automatic Recognition of Isolated Monophonic Musical Instrument Sounds using kNNC, Journal of Intelligent Information Systems, 24, 2, 199–221.
  • 20. Kostek B. (2004a), Musical instrument classification and duet analysis employing music information retrieval techniques, Proc. IEEE, 92, 4, 712–729.
  • 21. Kostek B. (2004b), Application of soft computing to automatic music information retrieval, Journal of American Society for Information Science and Technology, 55, 12, 1108–1116.
  • 22. Kostek B. (2007), Applying computational intelligence to musical acoustics, Archives of Acoustics, 32, 3, 617–629.
  • 23. Kostek B., Kania L. (2008), Music information analysis and retrieval techniques, Archives of Acoustics, 33, 4, 483–496.
  • 24. Kostek B., Czyzewski A. (2001), Representing musical instrument sounds for their automatic classification, Journal of Audio Engineering Society (JAES), 49, 9, 768–785.
  • 25. Kostek B., Krolikowski R. (1997), Application of artificial neural networks to the recognition of musical sounds, Archives of Acoustics, 22, 1, 27–50.
  • 26. Kostek B., Wieczorkowska A. (1997), Parametric representation of musical sounds, Archives of Acoustics, 22, 1, 3–26.
  • 27. Kuzmanovski I., Novič M. (2008), Counterpropagation neural networks in Matlab, Chemometrics and Intelligent Laboratory Systems, 90, 84–91.
  • 28. Li S., Liu Y. (2010), Feature Extraction of Lung Sounds Based on Bispectrum Analysis, Third International Symposium on Information Processing, pp. 393–397.
  • 29. Liu R., Zolzer U., Guulemard M. (2010), Excitation signature extraction for pitched musical instrument timbre analysis using Higher Order Statistics, 2010 IEEE International Conference on Multimedia and Expo (ICME), 19–23 July 2010, 10.1109/ICME.2010.5582571.
  • 30. Loughran R., Walker J., O’Neill M., O’Farrell M. (2008), The Use of Mel-frequency Cepstral Coefficients in Musical Instrument Identification, International Computer Music Conference.
  • 31. Martin K. D., Kim Y. E. (1998), Musical Instrument recognition: A pattern recognition approach, The Journal of the Acoustical Society of America, 109, 1768, DOI: http://dx.doi.org/10.1121/1.424083.
  • 32. Opolko F., Wapnick J. (1987), MUMS – McGill University master samples (in compact discs), McGill University, Montreal, Canada.
Remarks
Prepared with funds of the Polish Ministry of Science and Higher Education (MNiSW) under agreement 812/P-DUN/2016 for activities promoting science.
Document type
YADDA identifier
bwmeta1.element.baztech-b6e5333f-41b2-4c6d-9c48-3990efb7f0fa