
Author: 張書堯 (Chang, Shu-Yao)
Thesis title: 自動風格轉化之節奏變奏 – 以流行樂為例
(Automatic style transferring by rhythmic variation — pop song as example)
Advisor: 葉梅珍 (Yeh, Mei-Chen)
Degree: Master
Department: Department of Computer Science and Information Engineering
Year of publication: 2017
Academic year of graduation: 105
Language: Chinese
Number of pages: 32
Chinese keywords: 音樂變奏、電腦編曲、機率模型、馬爾可夫模型
English keywords: Music variation, computer composition, probabilistic model, Markov model
DOI URL: https://doi.org/10.6345/NTNU202202209
Document type: Academic thesis
Access count: 88 views, 8 downloads
  • Computer composition has been studied for a long time and has accumulated a large body of methods. Computational variation, however, is so diverse that no single system has yet been able to explain all variations, which makes finding general rules for musical variation a formidable challenge. Since there is no unique way to produce a variation, this thesis proposes a method grounded in music theory: we consider the techniques composers use and adopt the minimum repeated unit. In rhythm, the minimum repeated unit is the motif or theme; however, because motif extraction introduces computational error, and because modern music tends to repeat rhythms with small variations rather than exactly, we adopt the measure as the basic unit of repetition. We observe the transition probabilities between beats and between measures to build a Markov model, and, under the structural constraints of the input music's rhythm, find the most suitable rhythm fragments to serve as the rhythmic variation.

    Computer composition has been studied for a long time, and many methods have been proposed. In the task of computational variation in particular, finding a general procedure that explains all types of variation is challenging because variation itself lacks a precise definition: many methods can generate variations, but none of them applies to every type. In this thesis, we propose a systematic method for generating rhythmic variations based on music theory. More specifically, we use the concept of the minimum repeated pattern in rhythm (a theme or motif repeated in various ways). Although themes and motifs can generate variations, we adopt the measure as the minimum repeated pattern, to account for the errors that motif extraction incurs on music written with modern compositional techniques. We then compute the probabilities of beat and rhythm transitions and build a Markov model, in which we find frequent rhythm patterns (i.e., paths in the model) under music-structure constraints. Finally, we propose a method to evaluate the performance of the proposed style-transfer system.
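The pipeline the abstract describes — measure-level rhythm patterns as the minimum repeated unit, transition probabilities between them, and generation constrained by the input song's structure — can be sketched as follows. This is a minimal illustrative sketch, not the thesis implementation: the toy corpus, the sixteenth-note onset encoding, and the A-A-B-A structure labels are all assumptions made for the example.

```python
import random
from collections import defaultdict

# Illustrative sketch only (not the thesis code): a measure's rhythm is
# encoded as a tuple of onset positions in sixteenth-note slots, serving
# as the "minimum repeated unit" described in the abstract.
corpus = [
    [(0, 4, 8, 12), (0, 4, 8, 10, 12), (0, 4, 8, 12), (0, 2, 4, 8, 12)],
    [(0, 4, 8, 12), (0, 2, 4, 8, 12), (0, 4, 8, 10, 12), (0, 4, 8, 12)],
]

def build_transitions(songs):
    """Estimate measure-to-measure transition probabilities (first-order Markov)."""
    counts = defaultdict(lambda: defaultdict(int))
    for song in songs:
        for cur, nxt in zip(song, song[1:]):
            counts[cur][nxt] += 1
    model = {}
    for cur, succ in counts.items():
        total = sum(succ.values())
        model[cur] = {p: c / total for p, c in succ.items()}
    return model

def generate(model, start, structure, rng=random):
    """Random walk over the chain; measures that share a structure label
    (e.g. the A sections of an A-A-B-A song) reuse the same rhythm, which
    enforces the input song's structural frame."""
    out = [start]
    fixed = {structure[0]: start}
    for label in structure[1:]:
        if label in fixed:            # repeated section: copy its rhythm
            out.append(fixed[label])
            continue
        succ = model.get(out[-1])
        if succ:                      # sample the next measure by probability
            patterns = list(succ)
            choice = rng.choices(patterns, weights=[succ[p] for p in patterns])[0]
        else:                         # dead end: repeat the last measure
            choice = out[-1]
        fixed[label] = choice
        out.append(choice)
    return out

model = build_transitions(corpus)
variation = generate(model, (0, 4, 8, 12), ["A", "A", "B", "A"])
print(variation)
```

The structure constraint is what keeps the output a variation of the input rather than a free improvisation: the generated rhythm can only differ where the input's own form allows a new section.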

    Abstract
    List of Figures
    List of Tables
    Chapter 1: Introduction
      1.1 Background and Motivation
      1.2 Method
      1.3 System Architecture
      1.4 Thesis Organization
    Chapter 2: Related Work
      2.1 Computer Composition
      2.2 Computational Variation
    Chapter 3: Dataset Construction and Processing
      3.1 Beat Duration Computation
      3.2 Rhythm Computation
    Chapter 4: Building the Rhythm Transition Model
      4.1 Markov Model Based on Rhythm Transitions
      4.2 Markov Chain Based on Music Structure
    Chapter 5: Experimental Results
      5.1 Experimental Setup
      5.2 Results and Discussion
    Chapter 6: Conclusion
    References

    [1] W. B. de Haas, A. Volk, and P. van Kranenburg. Towards modelling variation in music as a foundation for similarity. In Proceedings of the 12th International Conference on Music Perception and Cognition (ICMPC), pp. 1085–1094, 2012.
    [2] T. Tanaka, T. Nishimoto, N. Ono, and S. Sagayama. Automatic music composition based on counterpoint and imitation using stochastic models. In Proceedings of the Sound and Music Computing (SMC) Conference, 2010.
    [3] D. Cope. Experiments in music intelligence (EMI). In Proceedings of the International Computer Music Conference (ICMC), 1987.
    [4] A. Schoenberg. Fundamentals of Musical Composition. Faber and Faber, 1967.
    [5] P. P. Cruz-Alcázar and E. Vidal-Ruiz. Learning regular grammars to model musical style: Comparing different coding schemes. In Proceedings of the International Colloquium on Grammatical Inference, pp. 211–222, 1998.
    [6] G. M. Rader. A method for composing simple traditional music by computer. Communications of the ACM, vol. 17, no. 11, pp. 631–638, 1974.
    [7] C. Ames. The Markov process as a compositional model: A survey and tutorial. Leonardo, vol. 22, no. 2, pp. 175–187, 1989.
    [8] A. Sastry. N-gram modeling of tabla sequences using variable-length hidden Markov models for improvisation and composition. Technical report, Georgia Tech Center for Music Technology, 2011.
    [9] B. Manaris, P. Roos, P. Machado, D. Krehbiel, L. Pellicoro, and J. Romero. A corpus-based hybrid approach to music analysis and composition. In Proceedings of the AAAI Conference on Artificial Intelligence, 2007.
    [10] J. A. Franklin. Jazz melody generation from recurrent network learning of several human melodies. AAAI Press, 2005.
    [11] G. Hadjeres, F. Pachet, and F. Nielsen. DeepBach: A steerable model for Bach chorales generation. arXiv preprint arXiv:1612.01010v2 [cs.AI], 2017.
    [12] R. U. Nelson. The Technique of Variation: A Study of the Instrumental Variation from Antonio de Cabezón to Max Reger. University of California Publications in Music 3. Berkeley: University of California Press, 1948.
    [13] O. Lartillot and M. Ayari. Motivic pattern extraction in music, and application to the study of Tunisian modal music. Institut de Recherche et Coordination Acoustique/Musique (IRCAM), 2007.
    [14] O. Lartillot. Efficient extraction of closed motivic patterns in multi-dimensional symbolic representations of music. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), 2005.
    [15] D. Turnbull and G. Lanckriet. A supervised approach for detecting boundaries in music using difference features and boosting. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), 2007.
    [16] F. Bimbot, E. Deruty, G. Sargent, and E. Vincent. Methodology and resources for the structural segmentation of music pieces into autonomous and comparable blocks. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), 2011.
    [17] F. O. Everardo Pérez. A logical approach for melodic variations. In Proceedings of the Tenth Latin American Workshop on Logic/Languages, Algorithms and New Methods of Reasoning (LANMR), 2011.
    [18] G. Ozcan, C. Isikhan, and A. Alpkocak. Melody extraction on MIDI music files. In Proceedings of the Seventh IEEE International Symposium on Multimedia (ISM), 2005.
    [19] J. Salamon, E. Gómez, D. P. W. Ellis, and G. Richard. Melody extraction from polyphonic music signals. IEEE Signal Processing Magazine, vol. 31, p. 118, 2014.
    [20] S. Kum, C. Oh, and J. Nam. Melody extraction using multi-column deep neural networks. Music Information Retrieval Evaluation eXchange (MIREX), 2016.
    [21] R. B. Zajonc. Attitudinal effects of mere exposure. Journal of Personality and Social Psychology (JPSP), vol. 9, pp. 1–27, 1968.
    [22] C. Roig, L. J. Tardón, I. Barbancho, and A. M. Barbancho. Automatic melody composition based on a probabilistic model of music style and harmonic rules. Knowledge-Based Systems, vol. 71, pp. 419–434, 2014.
    [23] A. Elgammal, B. Liu, M. Elhoseiny, and M. Mazzone (The Art and AI Laboratory, Rutgers University). CAN: Creative adversarial networks generating "art" by learning about styles and deviating from style norms. arXiv preprint arXiv:1706.07068v1 [cs.AI], 2017.
