Article title

Softcomputing approach to melody generation based on harmonic analysis

Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
This work aims to create an ANN-based system acting as a musical improviser: an artificial improviser that "hears" music and creates a melody in response. The data supplied to the improviser is MIDI-type musical data. It consists of the harmonic-rhythmic course that serves as the background for improvisation, together with the previously generated melody notes. The harmonic course is fed into the system as the currently sounding chord and the time remaining to the next chord, while the few dozen notes performed earlier indirectly carry information about the entire course, the musical context, and the style. Improvisation training is carried out to verify that the ANN works as a device producing correct-sounding musical improvisation. The improviser generates several hundred notes, which are substituted into a looped rhythmic-harmonic course and examined for quality.
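The input representation described above (current chord, time to the next chord, and a window of previously played notes) can be illustrated with a short sketch. This is only a minimal illustration under assumed conventions, not the paper's actual encoding: the chord is assumed to be a one-hot pitch-class vector, the time to the next chord is normalized by an assumed maximum of 1920 MIDI ticks (one 4/4 bar at 480 ticks per quarter note), and the note window is zero-padded; all function names and sizes are hypothetical.

```python
# Hedged sketch of one possible input encoding for the improviser:
# chord one-hot (12 pitch classes) + normalized time-to-next-chord (1 value)
# + the last `history` melody notes as scaled MIDI numbers. Illustrative only.

def encode_chord(chord_pitches):
    """One-hot pitch-class vector (12 slots) for the currently sounding chord."""
    vec = [0.0] * 12
    for midi_pitch in chord_pitches:
        vec[midi_pitch % 12] = 1.0
    return vec

def encode_input(chord_pitches, ticks_to_next_chord, recent_notes,
                 history=16, max_ticks=1920):
    """Concatenate chord one-hot, normalized time to the next chord, and the
    last `history` melody notes (MIDI numbers scaled to [0, 1], zero-padded)."""
    notes = recent_notes[-history:]
    padded = [0] * (history - len(notes)) + list(notes)
    return (encode_chord(chord_pitches)
            + [min(ticks_to_next_chord, max_ticks) / max_ticks]
            + [n / 127.0 for n in padded])

# Example: C major triad, half a bar to the next chord, short melody so far.
x = encode_input([60, 64, 67], 960, [60, 62, 64, 65])
# Vector length: 12 (chord) + 1 (time) + 16 (note window) = 29
```

A fixed-length vector of this kind is what a recurrent network such as the LSTM discussed in the bibliography ([10]-[12]) would consume one time step at a time.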
Authors
  • Wrocław University of Science and Technology, Faculty of Information and Communication Technology, Department of Computer Engineering
Bibliography
  • [1] J. P. Briot, F. Pachet, “Deep learning for music generation: challenges and directions”, Neural Comput. Appl., vol. 32, no. 4, pp. 981-993, 2020.
  • [2] J. P. Briot, G. Hadjeres, F. D. Pachet, “Deep Learning Techniques for Music Generation”, Springer Nature Switzerland AG, 2020.
  • [3] D. Herremans, C. H. Chuan, E. Chew, “A functional taxonomy of music generation systems”, ACM Comput. Surv. (CSUR), vol. 50, no. 5, pp. 1-30, 2017.
  • [4] C. F. Huang, C. Y. Huang, “Emotion-based AI music generation system with CVAE-GAN”, in 2020 IEEE Eurasia Conference on IOT, Communication and Engineering (ECICE), pp. 220-222, 2020.
  • [5] The MIDI Association, “Standard MIDI Files (SMF) specification”, https://www.midi.org/specifications-old/item/standard-midi-files-smf, 2020.
  • [6] Y. Bengio, P. Simard, P. Frasconi, “Learning long-term dependencies with gradient descent is difficult”, IEEE Trans. Neural Networks, vol. 5, no. 2, pp. 157-166, 1994.
  • [7] K. Zhao, S. Li, J. Cai, H. Wang, J. Wang, “An emotional symbolic music generation system based on LSTM networks”, in: 2019 IEEE 3rd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), pp. 2039-2043, 2019.
  • [8] A. Karpathy, “The unreasonable effectiveness of recurrent neural network”, https://karpathy.github.io/2015/05/21/rnn-effectiveness/, 2015.
  • [9] H. G. Zimmermann, R. Neuneier, “Neural network architectures for the modeling of dynamical systems”, in: A Field Guide to Dynamical Recurrent Networks, pp. 311-350, IEEE Press, Los Alamitos, 2001.
  • [10] S. Mangal, R. Modak, P. Joshi, “LSTM based music generation system”, doi:10.17148/IARJSET.2019.6508, 2019.
  • [11] S. Hochreiter, J. Schmidhuber, “Long short-term memory”, Neural Comput. vol. 9, no. 8, pp. 1735-1780, 1997.
  • [12] K. Greff, R. K. Srivastava, J. Koutnik, B. R. Steunebrink, J. Schmidhuber, “LSTM: a search space odyssey”, IEEE Trans. Neural Netw. Learn. Syst., vol. 28, pp. 2222-2232, 2017.
  • [13] A. Everest, K. Pohlmann, “Master Handbook of Acoustics”, 5th ed. edition, New York: McGraw-Hill, 2009.
  • [14] B. Thom, "Unsupervised Learning and Interactive Jazz/Blues Improvisation," in American Association for Artificial Intelligence, 2000.
  • [15] I. Simon, D. Morris, and S. Basu, "Exposing Parameters of a Trained Dynamic Model for Interactive Music Creation," in Association for the Advancement of Artificial Intelligence, 2008.
  • [16] C. Schmidt-Jones, “Understanding Basic Music Theory”, Rice University, Houston, Texas: Connexions, 2007.
  • [17] P. Ponce, J. Inesta, “Feature-Driven Recognition of Music Styles”, Lecture Notes in Computer Science 2652, pp. 773-781, 2003. doi:10.1007/978-3-540-44871-6_90
  • [18] J. Mazurkiewicz, “Softcomputing Approach to Music Generation”, in: Dependable Computer Systems and Networks. DepCoS-RELCOMEX 2023. Lecture Notes in Networks and Systems, vol 737, pp. 149-161, Springer, Cham, 2023. https://doi.org/10.1007/978-3-031-37720-4_14
Notes
Record developed with funds from MNiSW (Ministry of Science and Higher Education), agreement no. SONP/SP/546092/2022 under the programme "Social Responsibility of Science" - module: Popularisation of science and promotion of sport (2024).
Document type
YADDA identifier
bwmeta1.element.baztech-9a2e6ae3-dc4b-4d93-98b9-81e9f4af1828