
Author: Li, Bo-Heng (李博衡)
Title: Generating virtual data for zero-shot learning (虛擬數據生成於零樣本學習)
Advisor: Yeh, Mei-Chen (葉梅珍)
Committee Members: 陳祝嵩, 彭彥璁, 葉梅珍
Oral Defense Date: 2021/07/30
Degree: Master
Department: Department of Computer Science and Information Engineering
Publication Year: 2021
Academic Year: 109
Language: Chinese
Number of Pages: 25
Keywords (Chinese): 零樣本學習
Keywords (English): zero-shot learning
Research Method: Experimental design
DOI URL: http://doi.org/10.6345/NTNU202101196
Thesis Type: Academic thesis
Zero-shot learning refers to transferring knowledge learned from the training data of seen classes to the recognition of unseen classes. Li's method [22] takes semantic embeddings and visual features as training data: a deep learning model transforms image features into semantic classifiers, which are combined linearly or non-linearly with the semantic embeddings to find the class that matches the given visual feature.

However, using only data of seen classes as training samples may limit prediction performance, because the model has no information at all about unseen classes. To address this problem, this thesis proposes generating new samples from seen-class examples by random sampling and treating them as training samples that simulate unseen classes, that is, virtual data. The model can thus simulate the presence of unseen classes during learning, which improves classification accuracy. Experiments on several public benchmark datasets verify the feasibility of the proposed method.

Zero-shot learning refers to the transfer of knowledge from a model learned on seen classes to the recognition of unseen classes. Li's method [22] uses a deep learning model to transform visual features into semantic classifiers, which are combined linearly or non-linearly with semantic embeddings to predict the class.
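For concreteness, below is a minimal sketch of this image-guided semantic classification idea. The random linear map W stands in for the deep network of [22], and all dimensions (2048-d visual features, 300-d semantic embeddings, 50 classes) are illustrative assumptions, not values taken from the thesis.

import numpy as np

# Hypothetical dimensions; W is a stand-in for the deep model of [22].
D_VISUAL, D_SEMANTIC, N_CLASSES = 2048, 300, 50

rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(D_SEMANTIC, D_VISUAL))

def image_guided_classify(visual_feat, class_embeddings):
    # Map the image feature to a classifier living in semantic space.
    semantic_classifier = W @ visual_feat
    # Score every class by combining the classifier with that class's
    # semantic embedding (here a linear combination, i.e., inner product).
    scores = class_embeddings @ semantic_classifier
    return int(np.argmax(scores))

x = rng.normal(size=D_VISUAL)                 # one image's visual feature
S = rng.normal(size=(N_CLASSES, D_SEMANTIC))  # per-class semantic embeddings
predicted = image_guided_classify(x, S)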

Nevertheless, using only data of seen classes may limit prediction performance, because the model has no information about unseen classes. To solve this problem, we propose to generate virtual training data, which simulates unseen data, by randomly combining samples of seen classes. With such virtual data, the model can simulate the situation of recognizing unseen classes during learning, and the classification accuracy therefore improves. Experiments on several benchmark datasets verify the effectiveness of the proposed method.
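The abstracts do not spell out how seen-class samples are randomly combined into virtual data. Given that mixup [7] appears in the references, one plausible reading is a convex combination of randomly paired seen-class features together with the matching combination of their class semantic embeddings. The sketch below, including the name make_virtual_data, the Beta-distributed mixing weight, and the toy dimensions, is an assumption-laden illustration rather than the thesis's exact procedure.

import numpy as np

rng = np.random.default_rng(0)

def make_virtual_data(features, embeddings, labels, n_virtual, alpha=2.0):
    # Draw random pairs of seen-class samples.
    i = rng.integers(0, len(features), size=n_virtual)
    j = rng.integers(0, len(features), size=n_virtual)
    # Mixing weight per virtual sample (mixup-style assumption [7]).
    lam = rng.beta(alpha, alpha, size=(n_virtual, 1))
    # Mix both the visual features and the class semantic embeddings,
    # yielding (feature, embedding) pairs that mimic unseen classes.
    virt_feats = lam * features[i] + (1 - lam) * features[j]
    virt_embs = lam * embeddings[labels[i]] + (1 - lam) * embeddings[labels[j]]
    return virt_feats, virt_embs

# Toy usage: 100 seen-class samples, 2048-d features, 10 seen classes.
feats = rng.normal(size=(100, 2048))
labels = rng.integers(0, 10, size=100)
class_embs = rng.normal(size=(10, 300))
vf, ve = make_virtual_data(feats, class_embs, labels, n_virtual=32)
# vf.shape == (32, 2048), ve.shape == (32, 300)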

Chapter 1 Introduction 1
1.1 Background 1
1.2 Motivation 3
1.3 Thesis Organization 3
Chapter 2 Related Work 4
2.1 Semantic Embeddings 4
2.2 Zero-Shot Learning 5
Chapter 3 Method 7
3.1 Problem Definition 7
3.2 Virtual Data Construction 7
3.3 Model Architecture 8
Chapter 4 Experiments 11
4.1 Datasets 11
4.2 Implementation Details 12
4.3 Evaluation Protocol 13
4.4 Experiment 1: Quantitative Comparison 14
4.5 Experiment 2: Ablation Study 16
Chapter 5 Conclusion 21
References 22

[1] Zongyan Han, Zhenyong Fu, Jian Yang, “Learning the Redundancy-free Feature for Generalized Zero-Shot Object Recognition”, in CVPR, 2020.
[2] Jiamin Wu, Tianzhu Zhang, Zheng-Jun Zha, Jiebo Luo, Yongdong Zhang, Feng Wu, “Self-supervised Domain-aware Generative Network for Generalized Zero-shot Learning”, in CVPR, 2020.
[3] Yu-Ying Chou, Hsuan-Tien Lin, Tyng-Luh Liu, “Adaptive and Generative Zero-Shot Learning”, in ICLR, 2021.
[4] Dat Huynh, Ehsan Elhamifar, “Fine-Grained Generalized Zero-Shot Learning via Dense Attribute-Based Attention”, in CVPR, 2020.
[5] Wei Wang, Vincent W. Zheng, Han Yu, Chunyan Miao, “A Survey of Zero-Shot Learning: Settings, Methods, and Applications”, in ACM Transactions on Intelligent Systems and Technology (TIST), 2019.
[6] Tomas Mikolov, Kai Chen, Greg Corrado, Jeffrey Dean, “Efficient Estimation of Word Representations in Vector Space”, in ICLR, 2013.
[7] Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, David Lopez-Paz, “mixup: Beyond Empirical Risk Minimization”, in ICLR, 2018.
[8] Fang Li (李方), “基於深度視覺語義嵌入之零樣本學習” (Zero-Shot Learning Based on Deep Visual-Semantic Embedding), 2019.
[9] Y. Xian, B. Schiele, Z. Akata, “Zero-Shot Learning – the Good, the Bad and the Ugly”, in CVPR, 2017.
[10] Yanwei Fu, Leonid Sigal, “Semi-supervised Vocabulary-informed Learning”, in CVPR, 2016.
[11] Jie Song, Chengchao Shen, Yezhou Yang, Yang Liu, Mingli Song, “Transductive Unbiased Embedding for Zero-Shot Learning”, in CVPR, 2018.
[12] W.-L. Chao, S. Changpinyo, B. Gong, F. Sha, “An Empirical Study and Analysis of Generalized Zero-Shot Learning for Object Recognition in the Wild”, in ECCV, 2016.
[13] Long Chen, Hanwang Zhang, Jun Xiao, Wei Liu, Shih-Fu Chang, “Zero-Shot Visual Recognition Using Semantics-Preserving Adversarial Embedding Networks”, in CVPR, 2018.
[14] Yashas Annadani, Soma Biswas, “Preserving Semantic Relations for Zero-Shot Learning”, in CVPR, 2018.
[15] Huajie Jiang, Ruiping Wang, Shiguang Shan, Xilin Chen, “Learning Class Prototypes via Structure Alignment for Zero-Shot Recognition”, in ECCV, 2018.
[16] Jin Li, Xuguang Lan, Yang Liu, Le Wang, Nanning Zheng, “Compressing Unknown Images with Product Quantizer for Efficient Zero-Shot Classification”, in CVPR, 2019.
[17] M. Norouzi, T. Mikolov, S. Bengio, Y. Singer, J. Shlens, A. Frome, G. S. Corrado, J. Dean, “Zero-Shot Learning by Convex Combination of Semantic Embeddings”, in ICLR, 2014.
[18] A. Frome, G. S. Corrado, J. Shlens, S. Bengio, J. Dean, Marc'Aurelio Ranzato, T. Mikolov, “DeViSE: A Deep Visual-Semantic Embedding Model”, in NIPS, 2013.
[19] Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip H. S. Torr, Timothy M. Hospedales, “Learning to Compare: Relation Network for Few-Shot Learning”, in CVPR, 2018.
[20] Yang Liu, Jishun Guo, Deng Cai, Xiaofei He, “Attribute Attention for Semantic Disambiguation in Zero-Shot Learning”, in ICCV, 2019.
[21] Maunil R. Vyas, Hemanth Venkateswara, Sethuraman Panchanathan, “Leveraging Seen and Unseen Semantic Relationships for Generative Zero-Shot Learning”, in ECCV, 2020.
[22] Fang Li, Mei-Chen Yeh, “Generalized Zero-Shot Recognition through Image-Guided Semantic Classification”, in ICIP, 2021.
[23] P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, P. Perona, “Caltech-UCSD Birds 200”, Caltech Tech. Rep. CNS-TR-2010-001, 2010.
[24] G. Patterson, J. Hays, “SUN Attribute Database: Discovering, Annotating, and Recognizing Scene Attributes”, in CVPR, 2012.
[25] A. Farhadi, I. Endres, D. Hoiem, D. A. Forsyth, “Describing Objects by Their Attributes”, in CVPR, 2009.
