Article title

A Key-Finding Algorithm Based on Music Signature

Publication languages
EN
Abstracts
EN
The paper presents a key-finding algorithm based on the concept of a music signature. The proposed music signature is a set of 2-D vectors that can be treated as a compressed representation of musical content in 2-D space. Each vector represents a different pitch class: its direction is determined by the position of the corresponding major key in the circle of fifths, and its length reflects the multiplicity (i.e., the number of occurrences) of that pitch class in a musical piece or its fragment. The paper presents the theoretical background, examples explaining the essence of the idea, and the results of tests confirming the effectiveness of the proposed algorithm for finding the key through analysis of the music signature. The developed method was compared with key-finding algorithms using the Krumhansl-Kessler, Temperley, and Albrecht-Shanahan profiles. The experiments were performed on a set of Bach preludes, Bach fugues, and Chopin preludes.
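The signature construction described above can be sketched in a few lines of code. This is a minimal illustration, not the paper's implementation: the signature (one planar vector per pitch class, direction from the circle of fifths, length from the multiplicity) follows the abstract, while the decision rule (comparing the piece's signature with template signatures built from a rotated reference profile) and the flat diatonic template `MAJOR_TEMPLATE` are assumptions made here for the example; the paper itself evaluates against the Krumhansl-Kessler, Temperley, and Albrecht-Shanahan profiles.

```python
import math

def fifths_angle(pc):
    # Pitch class pc -> angle of its major key on the circle of fifths.
    # Successive fifths (7 semitones) are spaced 30 degrees apart.
    return 2 * math.pi * ((7 * pc) % 12) / 12

def music_signature(pc_counts):
    """12 pitch-class multiplicities -> list of 12 (x, y) vectors."""
    return [(c * math.cos(fifths_angle(pc)), c * math.sin(fifths_angle(pc)))
            for pc, c in enumerate(pc_counts)]

# Hypothetical decision rule (an assumption, not taken from the paper):
# compare the piece's signature with template signatures rotated to every
# tonic and pick the closest one. Toy diatonic profile for major keys:
MAJOR_TEMPLATE = [1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1]

def estimate_major_key(pc_counts):
    total = sum(pc_counts) or 1
    piece = music_signature([c / total for c in pc_counts])
    best_tonic, best_dist = None, float("inf")
    for tonic in range(12):
        rotated = [MAJOR_TEMPLATE[(pc - tonic) % 12] for pc in range(12)]
        t_total = sum(rotated)
        templ = music_signature([c / t_total for c in rotated])
        dist = sum((px - tx) ** 2 + (py - ty) ** 2
                   for (px, py), (tx, ty) in zip(piece, templ))
        if dist < best_dist:
            best_tonic, best_dist = tonic, dist
    return best_tonic

# C-major scale (C D E F G A B), each pitch class occurring once:
counts = [0] * 12
for pc in (0, 2, 4, 5, 7, 9, 11):
    counts[pc] = 1
print(estimate_major_key(counts))  # 0, i.e. C
```

Because each pitch class keeps a fixed direction, comparing two signatures vector-by-vector reduces to comparing the underlying pitch-class distributions; the 2-D layout mainly serves the visual/compressed interpretation emphasised in the abstract.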
Pages
447–457
Physical description
Bibliography: 48 items, figures, tables, charts.
Authors
  • Institute of Electronics, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland
  • Faculty of Physics, Adam Mickiewicz University, Umultowska 85, 61-614 Poznań, Poland
Bibliography
  • 1. Albrecht J., Huron D. (2014), A statistical approach to tracing the historical development of major and minor pitch distributions, 1400-1750, Music Perception, 31, 3, 223-243.
  • 2. Albrecht J., Shanahan D. (2013), The use of large corpora to train a new type of key-finding algorithm: An improved treatment of the minor mode, Music Perception: An Interdisciplinary Journal, 31, 1, 59-67.
  • 3. Anglade A., Benetos E., Mauch M., Dixon S. (2010), Improving music genre classification using automatically induced harmony rules, Journal of New Music Research, 39, 4, 349-361.
  • 4. Bernardes G., Caetano M., Cocharro D., Guedes C. (2016), A multi-level tonal interval space for modeling pitch relatedness and musical consonance, Journal of New Music Research, 45, 4, 1-14.
  • 5. Bhalke D. G., Rajesh B., Bormane D. S. (2017), Automatic genre classification using fractional fourier transform based mel frequency cepstral coefficient and timbral features, Archives of Acoustics, 42, 2, 213-222.
  • 6. Cancino-Chacón C. E., Grachten M., Agres K. (2017), From Bach to the Beatles: The simulation of human tonal expectation using ecologically-trained predictive models, The 18th International Society for Music Information Retrieval Conference, pp. 494-501, Suzhou.
  • 7. Cancino-Chacón C. E., Lattner S., Grachten M. (2014), Developing tonal perception through unsupervised learning, The 15th International Society for Music Information Retrieval Conference, Taipei.
  • 8. Chen T.-P., Su L. (2018), Functional harmony recognition of symbolic music data with multi-task recurrent neural networks, 19th ISMIR Conference, pp. 90-97, Paris.
  • 9. Chew E. (2000), Towards a mathematical model of tonality, Ph.D. Thesis, Massachusetts Institute of Technology.
  • 10. Chew E. (2014), Mathematical and computational modeling of tonality: theory and applications, Springer, New York.
  • 11. Dawson M. (2018), Connectionist representations of tonal music: discovering musical patterns by interpreting artificial neural networks, AU Press, Canada.
  • 12. Dorochowicz A., Kostek B. (2018), A study of music features derived from audio recording examples – a quantitative analysis, Archives of Acoustics, 43, 3, 505-516.
  • 13. Gómez E., Bonada J. (2005), Tonality visualization of polyphonic audio, Proceedings of International Computer Music Conference, Barcelona, Spain.
  • 14. Gómez E. (2006), Tonal description of polyphonic audio for music content processing, INFORMS Journal on Computing, Special Cluster on Computation in Music, 18, 3, 294-304.
  • 15. Grekow J. (2017a), Audio features dedicated to the detection of arousal and valence in music recordings, IEEE International Conference on Innovations in Intelligent Systems and Applications, pp. 40-44, Gdynia.
  • 16. Grekow J. (2017b), Comparative analysis of musical performances by using emotion tracking, 23rd International Symposium, ISMIS 2017, pp. 175-184, Warsaw.
  • 17. Handelman E., Sigler A. (2013), Key induction and key mapping using pitch-class assertions, Proceedings of the 4th International Conference on Mathematics and Computation in Music, pp. 115-127, Montreal.
  • 18. Harte A. Ch., Sandler M. B. (2005), Automatic chord identification using a quantised chromagram, Proceedings of the Audio Engineering Society, Barcelona.
  • 19. Herremans D., Chew E. (2017), MorpheuS: generating structured music with constrained patterns and tension, IEEE Transactions on Affective Computing, PP, 99, 1-14.
  • 20. Huang C.-Z. A., Duvenaud D., Gajos K. Z. (2016), ChordRipple: Recommending chords to help novice composers go beyond the ordinary, Proceedings of the 21st International Conference on Intelligent User Interfaces, pp. 241-250, Sonoma.
  • 21. Krumhansl C. L. (1990), Cognitive foundations of musical pitch, pp. 77-110, Oxford University Press, New York.
  • 22. Krumhansl C. L., Kessler E. J. (1982), Tracing the dynamic changes in perceived tonal organization in a spatial representation of musical keys, Psychological Review, 89, 4, 334-368.
  • 23. Lerdahl F. (2005), Tonal pitch space, Oxford University Press, Oxford.
  • 24. Lerdahl F., Krumhansl C.L. (2007), Modeling tonal tension, Music Perception: An Interdisciplinary Journal, 24, 4, 329-366.
  • 25. Longuet-Higgins H. C., Steedman M. J. (1971), On interpreting Bach, Machine Intelligence, 6, 221-241.
  • 26. Martorell A., Gómez E. (2011), Two-dimensional visual inspection of pitch-space, many time-scales and tonal uncertainty over time, Proceedings of 3rd International Conference on Mathematics and Computation in Music, pp. 140-150, Paris.
  • 27. McVicar M., Santos-Rodriguez R., Ni Y., Bie T. (2014), Automatic Chord Estimation from Audio: A Review of the State of the Art, IEEE/ACM Transactions on Audio, Speech, and Language Processing, 22, 2, 556-575.
  • 28. Osmalskyj J., Embrechts J.-J., Pierard S., Van Droogenbroeck M. (2012), Neural networks for musical chords recognition, Actes des Journées d’Informatique Musicale (JIM 2012), Mons, 39-46.
  • 29. Quinn I., White Ch. (2017), Corpus-derived key profiles are not transpositionally equivalent, Music Perception, 34, 5, 531-540.
  • 30. Papadopoulos H., Peeters G. (2012), Local key estimation from an audio signal relying on harmonic and metrical structures, IEEE Transactions on Audio, Speech, and Language Processing, 20, 4, 1297-1312.
  • 31. Perez-Sanchio C., Rizo D., Inesta J. M., Ramirez R. (2010), Genre classification of music by tonal harmony, Intelligent Data Analysis, 14, 5, 533-545.
  • 32. Reljin N., Pokrajac D. (2017), Music performers classification by using multifractal features: a case study, Archives of Acoustics, 42, 2, 223-233.
  • 33. Roig C., Tardón L. J., Barbancho I., Barbancho A. M. (2014), Automatic melody composition based on a probabilistic model of music style and harmonic rules, Knowledge-Based Systems, 71, 419-434.
  • 34. Rosner A., Kostek B. (2018), Automatic music genre classification based on musical instrument track separation, Journal of Intelligent Information Systems, 50, 363-384.
  • 35. Rosner A., Schuller B., Kostek B. (2014), Classification of music genres based on music separation into harmonic and drum components, Archives of Acoustics, 39, 629-638.
  • 36. Sabathé R., Coutinho E., Schuller B. (2017), Deep recurrent music writer: Memory-enhanced variational autoencoder-based musical score composition and an objective measure, Neural Networks, 2017 International Joint Conference on Neural Networks, pp. 3467-3474, Anchorage.
  • 37. Shepard R. (1982), Geometrical approximations to the structure of musical pitch, Psychological Review, 89, 305-333.
  • 38. Shmulevich I., Yli-Harja O. (2000), Localized key-finding: algorithms and applications, Music Perception, 17, 4, 531-544.
  • 39. Sigtia S., Boulanger-Lewandowski N., Dixon S. (2015), Audio chord recognition with a hybrid recurrent neural network, 16th International Society for Music Information Retrieval Conference, pp. 127-133, Malaga.
  • 40. Temperley D. (2002), A Bayesian approach to key-finding, ICMAI 2002, LNAI 2445, pp. 195-206.
  • 41. Temperley D. (2004), Bayesian models of musical structure and cognition, Musicae Scientiae, 8, 2, 175-205.
  • 42. Temperley D. (2007), Music and probability, Massachusetts Institute of Technology Press.
  • 43. Temperley D., Marvin E. (2008), Pitch-class distribution and the identification of key, Music Perception, 25, 3, 193-212.
  • 44. Toiviainen P., Krumhansl C. L. (2003), Measuring and modeling real-time responses to music: the dynamics of tonality induction, Perception, 32, 741-766.
  • 45. Tverdokhleb E., Myronova N., Fedoronchak T. (2017), Music signal processing to obtain its chorded representation, 4th International Scientific-Practical Conference Problems of Infocommunications, Kharkov.
  • 46. Tymoczko D. (2006), The geometry of musical chords, Science, 313, 5783, 72-74.
  • 47. Wu Y., Li W. (2018), Music Chord Recognition Based on Midi-Trained Deep Feature and BLSTM-CRF Hybrid Decoding, International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 376-380, Calgary.
  • 48. Zhou X.-H., Lerch A. (2015), Chord Detection Using Deep Learning, Proceedings of ISMIR 2015, pp. 52-58, Malaga.
Notes
Record prepared under agreement no. 509/P-DUN/2018 from the funds of the Polish Ministry of Science and Higher Education (MNiSW) allocated to science-dissemination activities (2019).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-34bcfda8-f6c5-4c4c-89b5-3236370ccad1