Graduate student: 蔡侑廷 You-Ting Tsai
Thesis title: 以觀眾為拍攝主體之虛擬攝影師系統 An Automatic Virtual Cameraman System for Audience
Advisor: 方瓊瑤 Fang, Chiung-Yao
Degree: Master
Department: 資訊工程學系 Department of Computer Science and Information Engineering
Year of publication: 2013
Academic year of graduation: 101
Language: Chinese
Number of pages: 86
Keywords (Chinese): 攝影師, 運鏡, 美學, 自動機
Keywords (English): cameraman, camera movement, aesthetics, automata
Document type: Academic thesis
The goal of this study is to build an automatic virtual cameraman system that films the audience. As personnel costs keep rising, hiring a commercial photography team to film a speech or conference is a heavy burden for non-profit schools and companies with limited budgets. Letting inexperienced operators shoot instead, in order to save costs, often yields footage that lacks aesthetic quality and smoothness, and may even discourage viewing. The proposed system not only saves personnel costs but also applies professional camera techniques to produce high-quality videos.
The experimental equipment consists of two PTZ cameras mounted at the front left or front right of the venue, each with a different role. One is the full-shot PTZ camera and the other is the camera-movement PTZ camera. The full-shot PTZ camera acts as the cameraman's eyes, feeding continuous full-view images of the venue into the system to support panoramic monitoring and subject detection. The camera-movement PTZ camera corresponds to the camera in the cameraman's hands: once the system has determined all the information a camera movement requires, this camera physically executes the movement.
The system imitates a professional cameraman's shooting techniques and performs camera movements automatically; each movement requires a movement class, a shot class, and a subject. The system takes the continuous images from the full-shot PTZ camera and extracts four motion features that describe audience behavior. These features are then fuzzified, converting the numeric values into natural-language terms so that the cameraman's movement habits can be analyzed. The fuzzified features are fed into an automatic camera movement model (ACMM), which records professional cameramen's movement habits and outputs a camera movement and shot class suited to the input situation. The shooting subject is selected using five features, each representing a different aspect of aesthetics. Once all the movement parameters have been computed, the data are sent to the camera-movement PTZ camera to execute the movement.
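The fuzzification step described above can be sketched as follows. This is a minimal illustration only: the triangular membership functions, the three linguistic terms, and the normalized feature range are assumptions for the sketch, not the thesis's actual parameters.

```python
def triangular(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical linguistic terms for one normalized motion feature in [0, 1];
# the thesis's actual term set and membership shapes are not given here.
TERMS = {
    "low":    (-1.0, 0.0, 0.4),
    "medium": (0.1, 0.5, 0.9),
    "high":   (0.6, 1.0, 2.0),
}

def fuzzify(value, terms=TERMS):
    """Return the linguistic term with the highest membership degree."""
    return max(terms, key=lambda t: triangular(value, *terms[t]))
```

A defuzzified label such as `fuzzify(0.95)` → `"high"` is what would then be passed on to the camera movement model.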
Experimental results show that the system performs smooth, real-time camera movements at a professional level, meeting the requirements of lecture recording.
This study proposes an automatic virtual cameraman system that films the audience. Because personnel costs keep rising, employing a commercial photography team to film speeches or conferences is a heavy burden for non-profit schools or companies with limited budgets. If an inexperienced operator shoots instead to save costs, the resulting videos are often neither smooth nor aesthetically pleasing. Our system not only saves personnel costs but also produces professional videos using photographic techniques.
The equipment consists of two PTZ web cameras located at the front right or front left of the lecture theatre. The two cameras have different functions. One, named the full-shot PTZ camera, captures the whole auditorium and feeds continuous images into the system. The other, named the camera-movement PTZ camera, executes camera movements after the system has computed all the necessary movement information.
Camera-movement information comprises the movement class, the shot class, and the subject. To obtain this information, the system first reads continuous images from the full-shot PTZ camera and extracts four motion features representing four kinds of audience behavior. To bridge the motion features and natural language, the system fuzzifies them. To decide the camera movement and shot, we construct the automatic camera movement model (ACMM), an automaton. The ACMM records photographers' habits of camera movement and shot selection, and picks a suitable movement and shot from the input fuzzy motion features. The system then chooses the subject using four aesthetic features: continuity, repetition, luminance, and composition. Finally, the system operates the camera-movement PTZ camera to complete the recording.
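The ACMM decision step can be sketched as a small finite-state machine. The state names, feature tuples, and transitions below are illustrative assumptions; the thesis's trained model has eleven movement classes and six shot classes, which are not reproduced here.

```python
class ACMM:
    """Sketch of an automaton for camera movement: states are shot classes,
    and each transition is keyed by the current state plus the tuple of
    fuzzified motion features (hypothetical labels, not the trained model)."""

    def __init__(self, transitions, start="full-shot"):
        # transitions: (state, feature_tuple) -> (camera_movement, next_state)
        self.transitions = transitions
        self.state = start

    def step(self, features):
        # Unlisted situations keep the current shot and hold the camera still.
        movement, next_state = self.transitions.get(
            (self.state, features), ("hold", self.state))
        self.state = next_state
        return movement, next_state
```

For example, a transition table might map `("full-shot", ("high", "low", "low", "low"))` to `("pan", "medium-shot")`, so that strong motion in one feature triggers a pan and a tighter shot; the default "hold" transition keeps the output stable when no trained habit matches.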
In the experiments, the ACMM contained eleven camera-movement classes and six shot classes and was trained with six lecture videos. Compared with an amateur's recording and point-and-shoot footage, our system performs noticeably better. Moreover, the system works in real time, so it misses no action in the auditorium.