Article title

Music Mood Visualization Using Self-Organizing Maps

Publication languages
EN
Abstracts
EN
Due to the increasing amount of music made available in digital form on the Internet, automatic methods of organizing music collections are sought. The paper presents an approach to the graphical representation of the mood of songs based on Self-Organizing Maps. Parameters describing the mood of music are proposed, calculated, and then analyzed by correlating them with mood dimensions derived from Multidimensional Scaling. A map is created in which music excerpts with a similar mood are placed next to each other on a two-dimensional display.
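As an illustration of the approach described above, the following is a minimal sketch of a Self-Organizing Map in plain NumPy. It is not the authors' implementation; the grid size, the decay schedules, and the feature matrix X (one row of hypothetical mood-related parameters per music excerpt) are assumptions made for the example. Training pulls the weight vectors of nearby map nodes toward each input, so excerpts with similar parameter vectors end up in neighboring cells of the two-dimensional display.

```python
# A minimal SOM sketch (not the authors' implementation): it organizes
# feature vectors on a 2D grid so similar vectors land in nearby cells.
import numpy as np

def train_som(X, rows=10, cols=10, epochs=100, lr0=0.5, sigma0=3.0, seed=0):
    """Train a rows x cols map on X (n_samples x n_features)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.random((rows, cols, d))          # one weight vector per map node
    # Grid coordinates of each node, used by the neighborhood function.
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)  # decaying neighborhood radius
        for x in X[rng.permutation(n)]:
            # Best-matching unit: the node whose weights are closest to x.
            bmu = np.unravel_index(
                np.argmin(np.linalg.norm(W - x, axis=-1)), (rows, cols))
            # Gaussian neighborhood centered on the BMU in grid space.
            g = np.exp(-np.sum((grid - np.array(bmu)) ** 2, axis=-1)
                       / (2 * sigma ** 2))
            # Pull the weights of the BMU and its neighbors toward x.
            W += lr * g[..., None] * (x - W)
    return W

def map_position(W, x):
    """Grid cell (row, col) where excerpt x is placed on the trained map."""
    return np.unravel_index(
        np.argmin(np.linalg.norm(W - x, axis=-1)), W.shape[:2])

# Hypothetical data: 200 music excerpts, 8 mood-related parameters each.
X = np.random.default_rng(1).random((200, 8))
W = train_som(X)
print(map_position(W, X[0]))  # map cell assigned to the first excerpt
```

Plotting the cell returned by map_position for every excerpt would yield the kind of two-dimensional mood map described in the abstract.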
Year
Pages
513–525
Physical description
Bibliography: 69 items, figures, tables, charts.
Authors
author
  • Audio Acoustics Laboratory, Faculty of Electronics, Telecommunications and Informatics, Gdansk University of Technology, Narutowicza 11/12, 81-233 Gdańsk, Poland
author
  • Audio Acoustics Laboratory, Faculty of Electronics, Telecommunications and Informatics, Gdansk University of Technology, Narutowicza 11/12, 81-233 Gdańsk, Poland
Bibliography
  • 1. AMAZON, http://www.amazon.com/
  • 2. Barbedo J.G.A., Lopes A. (2005), A New Cognitive Model for Objective Assessment of Audio Quality, J. Audio Eng. Soc., 53, 1/2, 22–31; February.
  • 3. Bigand E., Vieillard S., Madurell F., Marozeau J., Dacquet A. (2005), Multidimensional scaling of emotional responses to music: The effect of musical expertise and of the duration of the excerpts, Cognition & Emotion, 19, 8, 1113–1139.
  • 4. Borg I., Groenen P. (2007), Modern Multidimensional Scaling: Theory and Applications, Springer, Germany.
  • 5. Casey M.A., Veltkamp R., Goto M., Leman M., Rhodes C., Slaney M. (2008), Content-Based Music Information Retrieval: Current Directions and Future Challenges, Proc. of the IEEE, 96, 4, April.
  • 6. Cruz R., Brisson A., Paiva A., Lopes E. (2007), I-Sounds – Emotion-based Music Generation for Virtual Environments, Springer-Verlag, Berlin Heidelberg, 769–770.
  • 7. Drossos K., Floros A., Kermanidis K.L. (2015), Evaluating the Impact of Sound Events’ Rhythm Characteristics to Listener’s Valence, J. Audio Eng. Soc., 63, 3, 139–153; March.
  • 8. Frühwirth M. (2001), Automatische Analyse und Organisation von Musikarchiven [Automatic analysis and organization of music archives, in German], Vienna.
  • 9. Frühwirth M., Rauber A. (2001), Self-Organizing Maps for Content-Based Music Clustering, Proceedings of the 12th Italian Workshop on Neural Nets, Vietri sul Mare, Salerno, Italy.
  • 10. Hevner K. (1936), Experimental studies of the elements of expression in music, American Journal of Psychology, 48, 246–268.
  • 11. Hoffmann P., Kostek B. (2014), Music Data Processing and Mining in Large Databases for Active Media, Active Media Technology, LNCS, 8610, 85–95, Springer.
  • 12. Huron D. (2000), Perceptual and cognitive applications in music retrieval, Proc. Int. Conf. MIR.
  • 13. ISMIR, International Conference on Music Information Retrieval website (http://ismir2015.ismir.net).
  • 14. ITUNES, https://www.apple.com/pl/itunes/ (accessed August 2015).
  • 15. Jolliffe I.T. (2002), Principal Component Analysis, Series: Springer Series in Statistics, 2nd ed., Springer, NY.
  • 16. Kaminsky I. (1995), Automatic source identification of monophonic musical instrument sounds, Proceedings., IEEE International Conference on Neural Networks.
  • 17. Kim Y.E., Schmidt E., Emelle L. (2008), MoodSwings: A collaborative game for music mood label collection, [in:] ISMIR, Philadelphia, PA, September 2008.
  • 18. Kim Y.E., Schmidt E.M., Migneco R., Morton B.G., Richardson P., Scott J., Speck J.A., Turnbull D. (2010), Music Emotion Recognition: A State of the Art Review, 11th International Society for Music Information Retrieval Conference (ISMIR 2010), 255–266.
  • 19. Kohonen T. (1982), Self-organized formation of topologically correct feature maps, Biol. Cybern., 43, 59–69.
  • 20. Kohonen T. (1984), Self-Organization and Associative Memory, Springer-Verlag.
  • 21. Kohonen T., Honkela T. (2007), Kohonen Network, Scholarpedia.
  • 22. Kostek B. (2011), Content-Based Approach to Automatic Recommendation of Music, 131st Audio Eng. Soc. Convention, October 20–23, New York.
  • 23. Kostek B. (2013), Music Information Retrieval in Music Repositories, Chapter 17, [in:] Rough Sets and Intelligent Systems, Skowron A., Suraj Z., [Eds.], 1, ISRL, 42, 463–489, Springer Verlag, Berlin Heidelberg.
  • 24. Kostek B. (2014), Auditory Display Applied to Research in Music and Acoustics, Archives of Acoustics, 39, 2, 203–214, DOI 10.2478/aoa-2014-0025.
  • 25. Kostek B., Czyzewski A. (2001), Representing Musical Instrument Sounds for Their Automatic Classification, J. Audio Eng. Soc., 49, 9, 768–785.
  • 26. Kostek B., Hoffmann P., Kaczmarek A., Spaleniak P. (2013), Creating a Reliable Music Discovery and Recommendation System, [in:] Intelligent Tools for Building a Scientific Information Platform: From Research to Implementation, Springer Verlag.
  • 27. Kostek B., Kaczmarek A. (2013), Music Recommendation Based on Multidimensional Description and Similarity Measures, Fundamenta Informaticae, 127, 1–4, 325–340, DOI 10.3233/FI-2013-912.
  • 28. Kostek B., Kupryjanow A., Zwan P., Jiang W., Raś Z., Wojnarski M., Swietlicka J. (2011), Report of the ISMIS 2011 Contest: Music Information Retrieval, Foundations of Intelligent Systems, ISMIS 2011, Kryszkiewicz M. et al. [Eds.], LNAI 6804, 715–724, Springer Verlag, Berlin, Heidelberg.
  • 29. Laurier C., Sordo M., Serra J., Herrera P. (2009), Music Mood Representations from Social Tags, Proceedings of the 10th International Society for Music Information Conference, Kobe, Japan, 381–386.
  • 30. Lewis C.I. (1929), Mind and the World Order, New York.
  • 31. Lima M.F.M., Machado J.A.T., Costa A.C. (2012), A Multidimensional Scaling Analysis of Musical Sounds Based on Pseudo Phase Plane, Appl. Anal., Special Issue (2012), Article ID 436108.
  • 32. Lu L., Liu D., Zhang H.J. (2006), Automatic mood detection and tracking of music audio signals, IEEE Trans. Audio Speech Language Processing, 14, 1, 5–18, Jan.
  • 33. Małecki P. (2013), Evaluation of objective and subjective factors of highly reverberant acoustic field [in Polish], Doctoral Thesis, AGH University of Science and Technology, Krakow.
  • 34. Markov K., Matsui T. (2014), Music Genre and Emotion Recognition Using Gaussian Processes, IEEE Access, 2, 688–697.
  • 35. MATLAB, Neural Network Toolbox, available at www.mathworks.com/help/pdf_doc/nnet/nnet_ug.pdf (accessed January 2015).
  • 36. MIREX (2009), Mood Multi Tag Data Description, http://www.music-ir.org/archive/papers/Mood_Multi_Tag_Data_Description.pdf
  • 37. Mostafavi A.C., Raś Z., Wieczorkowska A. (2013), Developing Personalized Classifiers for Retrieving Music by Mood, [in:] New Frontiers in Mining Complex Patterns NFMCP 2013, International Workshop, held at ECML-PKDD 2013, 27 September 2013, Prague, Czech Republic.
  • 38. MUFIN system; http://www.mufin.com/us/ (accessed August 2015).
  • 39. MUSICOVERY system; http://musicovery.com/ (accessed August 2015).
  • 40. Novello A., McKinney M.F., Kohlrausch A. (2011), Perceptual Evaluation of Inter-song Similarity in Western Popular Music, J. New Music Research, 40, 1, 1–26.
  • 41. Palomäki K., Pulkki V., Karjalainen M. (1999), Neural Network Approach to Analyze Spatial Sound, AES 16th International Conference on Spatial Sound Reproduction, Finland.
  • 42. Pampalk E. (2001), Islands of Music, Analysis, Organization, and Visualization of Music Archives, Master Thesis, Vienna Technical University.
  • 43. Pampalk E., Rauber A., Merkl D. (2002), Using Smoothed Data Histograms for Cluster Visualization in Self-Organizing Map, Proceedings of the International Conference on Artificial Neural Network, 871–876.
  • 44. Panda R., Paiva R.P. (2011), Using Support Vector Machines for Automatic Mood Tracking in Audio Music, 130th Audio Eng. Soc. Convention, Paper No. 8378, London, UK, May 13–16.
  • 45. PANDORA – Internet Radio: http://www.pandora.com
  • 46. Papaodysseus C., Roussopoulos G., Fragoulis D., Panagopoulos A., Alexiou C. (2001), A New Approach to the Automatic Recognition of Musical Recordings, J. Audio Eng. Soc., 49, 1/2, 23–35; February.
  • 47. Plewa M., Kostek B. (2013), Multidimensional Scaling Analysis Applied to Music Mood Recognition, 134th Audio Eng. Soc. Convention, May 4–7, Paper No. 8876, Rome.
  • 48. Raś Z., Wieczorkowska A. (2010), Advances in Music Information Retrieval, Series: Studies in Computational Intelligence, Vol. 274, Springer-Verlag, Berlin Heidelberg 2010, ISBN: 978-3-642-11673-5.
  • 49. Rauber A., Frühwirth M. (2001), Automatically Analyzing and Organizing Music Archives, [in:] Proceedings of the 5th European Conference on Research and Advanced Technology for Digital Libraries (ECDL 2001), Sept. 4–8, 2001, Darmstadt, Germany, Springer Lecture Notes in Computer Science, Springer.
  • 50. Rauber A., Pampalk E., Merkl D. (2002a), Content-based Music Indexing and Organization, Proc. 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 02), 409–410, August 11–15, Tampere, Finland.
  • 51. Rauber A., Pampalk E., Merkl D. (2002b), Using Psycho-Acoustic Models and Self-Organizing Maps to Create a Hierarchical Structuring of Music by Musical Styles, Proceedings of the 3rd International Symposium on Music Information Retrieval (ISMIR 2002), 71–80, October 13–17, Paris, France.
  • 52. Rojas R. (1996), Neural Networks: A Systematic Introduction, Berlin, Springer.
  • 53. Rosner A., Schuller B., Kostek B. (2014), Classification of Music Genres Based on Music Separation into Harmonic and Drum Components, Archives of Acoustics, 39, 4, 629–638, DOI: 10.2478/aoa-2014-0068.
  • 54. Smith L. (2002), A tutorial on Principal Components Analysis, available at http://www.cs.otago.ac.nz/cosc453/student_tutorials/principal_components.pdf (accessed May 2015).
  • 55. Schmidt E.M., Kim Y.E. (2010), Prediction of time-varying musical mood distributions using Kalman filtering, Proceedings of the 2010 IEEE International Conference on Machine Learning and Applications, Washington, D.C.: ICMLA.
  • 56. Schubert E. (2003), Update of the Hevner adjective checklist, Perceptual and Motor Skills, 96, 1117–1122.
  • 57. Rumsey F. (2011), Semantic Audio: Machines Get Clever with Music, J. Audio Eng. Soc., 59, 11, 882–887, November.
  • 58. Rumsey F. (2014), About Semantic Audio, J. Audio Eng. Soc., 62, 4, 281–285, April.
  • 59. Tadeusiewicz R. (1993), Sieci neuronowe [Neural networks, in Polish], Akademicka Oficyna Wydawnicza, Warszawa.
  • 60. Thayer R.E. (1989), The Biopsychology of Mood and Arousal, Oxford University Press.
  • 61. Trochidis K., Delbé C., Bigand E. (2011), Investigation of the relationships between audio features and induced emotions in Contemporary Western music, SMC Conference.
  • 62. Tuzman A. (2001), Wavelet and Self-Organizing Map Based Declicker, Paper No. 1959, 20th International Conference: Archiving, Restoration, and New Methods of Recording.
  • 63. Tzanetakis G., Cook P. (2002), Musical genre classification of audio signals, IEEE Transactions on Speech and Audio Processing, 10, 3, 293–302.
  • 64. Ultsch A. (2003), U*-Matrix: A tool to visualize clusters in high dimensional data, Department of Computer Science, University of Marburg, Technical Report Nr. 36:1–12.
  • 65. UMETRIX (2015), http://umetrics.com/sites/default/files/books/sample_chapters/multimega_parti-3_0.pdf (accessed June 10, 2015).
  • 66. Wagenaars W.M., Houtsma A.J., Van Lieshout R.A. (1986), Subjective Evaluation of Dynamic Compression in Music, J. Audio Eng. Soc., 34, 1/2, 10–18; February.
  • 67. Wieczorkowska A., Kubera E., Kubik-Komar A. (2011), Analysis of Recognition of a Musical Instrument in Sound Mixes Using Support Vector Machines, Fundamenta Informaticae, 107, 1.
  • 68. XLStat (2015), http://www.xlstat.com (accessed June 2015).
  • 69. Zentner M., Grandjean D., Scherer K. (2008), Emotions evoked by the sound of music: Characterization, classification, and measurement, Emotion, 8, 494–521.
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-cabd7326-8652-46c6-a44b-83593b1a0b5c