| Field | Value |
|---|---|
| Author | 林君儒 (Lin, Chun-Ju) |
| Title | 基於卷積神經網路的電影海報概念分析 (Concept Analysis in Movie Posters via Convolutional Neural Networks) |
| Advisor | 葉梅珍 (Yeh, Mei-Chen) |
| Degree | Master |
| Department | Department of Computer Science and Information Engineering |
| Publication Year | 2017 |
| Academic Year | 105 |
| Language | Chinese |
| Pages | 38 |
| Keywords (Chinese) | 電影海報、多媒體內容分析、卷積神經網路、情緒 |
| Keywords (English) | Movie poster, Multimedia Content Analysis, Convolutional Neural Network, Emotion |
| DOI | https://doi.org/10.6345/NTNU202202692 |
| Document Type | Academic thesis |
| Views / Downloads | 140 / 23 |
Abstract (translated from the Chinese):

In recent years, people have had diverse forms of leisure and entertainment, yet watching movies remains the first choice of many, and movie posters play an important role in film promotion. Poster designers combine varied visual elements to create attractive images that match a film's style and concepts, and these design elements are closely tied to the film itself. Viewers can easily perceive a film's concepts from its poster through visual cues alone; what, then, are the poster concepts that we receive through those cues? This thesis assumes that movie poster design patterns are strongly related to movie genre: films of similar genres tend to use the same poster design elements. We collected movie posters spanning ten years (2006–2015) from the IMP Awards website as our dataset, and obtained each film's genre information and keywords from the IMDb website. We use the Convolutional Neural Network, which has achieved excellent results in image recognition, to extract features from movie posters, and, treating movie keywords and emotions as poster concepts, we analyze whether the neurons that encode large numbers of image features are correlated with those concepts. Our experiments show that a Convolutional Neural Network achieves good results on multi-label genre classification of movie posters, and that the dimensionality of the feature vector extracted from the Fc7 layer does not affect classification performance. However, for the analysis that treats movie keywords and emotions as poster concepts, the experiments show that, under the methods used in this thesis, the correlation with neuron values is not evident.
Abstract (English):

In recent years, people have had a wide variety of entertainment options, yet watching movies remains the first choice for many, and movie posters play an important role in advertising a film. People can easily grasp the concepts of a poster from the visual cues it presents; but what exactly are those concepts? In this thesis, we assume that the design of a movie poster is related to the movie's genre; in other words, movies of the same genre may share a similar poster design style. We collect movie posters released from 2006 to 2015 from the IMP Awards website as our dataset and obtain each movie's genres and keywords from the IMDb website. We use a Convolutional Neural Network (CNN), which has shown excellent performance in image recognition, as the main analysis technique to extract the features (neuron values) of a movie poster. Finally, we analyze the correlation between neuron values and keywords (and emotions), which we regard as the concepts a movie poster may convey. Our study shows that a CNN classifies movie posters by genre well, and that the dimensionality of the Fc7 layer does not affect classification effectiveness. However, the correlation between neuron values and keywords (and emotions) is not evident under the analysis approaches proposed in this thesis.
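The neuron–concept correlation analysis described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the thesis's actual pipeline: `feats` stands in for Fc7 activations (4096 per poster) and `keyws` for binary keyword indicators, both synthetic; one neuron is deliberately planted to correlate with a keyword so the effect is visible.

```python
import numpy as np

def neuron_keyword_correlation(features, labels):
    """Pearson correlation between every neuron (column of `features`)
    and every binary keyword indicator (column of `labels`).
    Returns an (n_neurons, n_keywords) matrix of r values."""
    F = (features - features.mean(0)) / (features.std(0) + 1e-12)
    L = (labels - labels.mean(0)) / (labels.std(0) + 1e-12)
    return (F.T @ L) / len(features)

rng = np.random.default_rng(0)
n = 200                                             # number of posters (toy)
feats = rng.normal(size=(n, 4096))                  # stand-in for Fc7 activations
keyws = rng.integers(0, 2, size=(n, 10)).astype(float)  # binary keyword labels
# plant one strongly correlated neuron for illustration
feats[:, 0] = keyws[:, 0] + 0.1 * rng.normal(size=n)

R = neuron_keyword_correlation(feats, keyws)
print(R.shape)  # → (4096, 10)
```

For the planted pair, `R[0, 0]` is close to 1, while the remaining entries hover near 0 — the thesis's negative finding corresponds to the case where no large entries emerge from the real data.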