Graduate Student: 呂律民 (Lu-Min Lu)
Thesis Title: 人聲與掌聲驅動之音訊互動裝置開發及其應用研究 (A Study on the Development of Audio Interactive Device Driven by Voice and Clap Sound with its Applications)
Advisor: 周遵儒 (Chou, Tzren-Ru)
Degree: Master
Department: Department of Graphic Arts and Communications
Year of Publication: 2012
Graduation Academic Year: 100 (ROC calendar, 2011-2012)
Language: Chinese
Number of Pages: 75
Keywords: Audio interaction, Audio effector, Pitch detection, Beat detection
Document Type: Academic thesis
Usage Statistics: 202 views, 5 downloads
In music performance settings, the audience usually receives the performer's music passively. This seemingly one-way transmission actually involves a degree of interaction: audiences respond to the performance with emotional sounds such as cheering, shouting, and applause, and a skilled performer reads these emotional cues and adjusts the performance accordingly, heightening the audience's mood and building resonance with them.
The purpose of this study is to develop an interactive audio-effector device driven by human voice and clap sounds, so that performers and audiences can interact through the music itself at live performances. The device captures the audience's emotional sounds, such as cheers, screams, shouts, and applause, with a microphone, performs feature detection on the received signal, and converts the features into corresponding audio-effector parameters and music-tempo parameters, producing different filtering effects and tempo changes in the output music. The design was evaluated through expert interviews: practitioners from the industry assessed the interactive device, suggested improvements, and judged its feasibility for live music performance. With this device, the audience can change the musical feedback at the venue in real time through their emotional sounds, strengthening their sense of participation in the performance and the interaction between audience and performers.
Keywords: audio interaction, audio effector, pitch detection, beat detection
In a performance space, the audience usually receives the music played by the performers one-way. A good performer observes the audience's emotional expression and adjusts the music to heighten the audience's mood.
To improve the interaction between performers and audience, the purpose of this study is to develop an interactive device driven by voice and clap sounds. First, the input unit records the audience's expressive voices and clapping with a microphone; then the processing unit analyzes the characteristics of the sound and maps them to parameters of the audio effector and the music tempo; finally, the output unit filters the ongoing music and adjusts its tempo according to the result of the processing unit. The audience can thus adjust the musical feedback at the performance space through their expressive sounds and clapping. With the interactive device, performers and audiences can create a better musical performance and achieve more interaction together.
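The abstract describes a three-stage pipeline: an input unit that records the crowd with a microphone, a processing unit that extracts sound features (the keywords point to pitch detection and beat detection) and maps them to effector and tempo parameters, and an output unit that filters the ongoing music. The sketch below is a minimal, offline Python/NumPy illustration of that idea, not the thesis's implementation: the autocorrelation pitch estimator, the energy-based clap detector, the one-pole low-pass "effector", the synthetic test signal, and both mapping rules (voice pitch to filter cutoff, clap rate to a tempo scale) are assumptions chosen for clarity.

```python
import numpy as np

SR = 44100    # sample rate in Hz (assumed value)
FRAME = 2048  # analysis frame length in samples
HOP = 512     # hop size between energy frames


def frame_energy(x, frame=FRAME, hop=HOP):
    """Short-time energy per frame, the raw material for clap detection."""
    n = 1 + max(0, len(x) - frame) // hop
    return np.array([np.sum(x[i * hop:i * hop + frame] ** 2) for i in range(n)])


def detect_claps(x, ratio=8.0):
    """Indices of clap onsets: frames whose energy exceeds `ratio` times the
    median frame energy, with each run of loud frames counted once."""
    e = frame_energy(x)
    loud = e > ratio * (np.median(e) + 1e-12)
    return np.where(loud & ~np.roll(loud, 1))[0]  # rising edges only


def estimate_pitch(frame_sig, fmin=80.0, fmax=1000.0):
    """Autocorrelation pitch estimate (Hz) of one mono frame of crowd voice."""
    x = frame_sig - np.mean(frame_sig)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # non-negative lags
    lo, hi = int(SR / fmax), int(SR / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return SR / lag


def lowpass(x, cutoff_hz):
    """One-pole low-pass 'audio effector'; the cutoff is the mapped parameter."""
    a = np.exp(-2.0 * np.pi * cutoff_hz / SR)
    y = np.empty_like(x, dtype=float)
    acc = 0.0
    for i, s in enumerate(x):
        acc = (1.0 - a) * s + a * acc
        y[i] = acc
    return y


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(SR) / SR                       # one second of synthetic "crowd"
    crowd = 0.2 * np.sin(2 * np.pi * 440.0 * t)  # stand-in for a voiced cheer
    for start in (0, SR // 2):                   # two clap bursts, 0.5 s apart
        crowd[start:start + 800] += rng.normal(0.0, 1.0, 800)

    pitch = estimate_pitch(crowd[SR // 4:SR // 4 + FRAME])  # clap-free frame
    clap_rate = len(detect_claps(crowd)) / (len(crowd) / SR)  # claps per second

    # Illustrative mapping rules (assumptions, not the thesis's actual ones):
    cutoff = np.clip(pitch * 4.0, 200.0, 8000.0)     # voice pitch -> filter cutoff
    tempo_scale = np.clip(clap_rate / 2.0, 0.5, 2.0)  # clap rate -> tempo factor

    music = 0.5 * np.sin(2 * np.pi * 220.0 * t)  # stand-in for the ongoing mix
    processed = lowpass(music, cutoff)
    print(f"pitch={pitch:.1f} Hz  cutoff={cutoff:.0f} Hz  tempo x{tempo_scale:.2f}")
```

In a live setting these functions would run on short microphone frames in real time, and the mapped parameters would drive the actual effector and sequencer rather than being printed.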