Graduate Student: | Yu, Cheng-Lin (余晟麟) |
---|---|
Thesis Title: | A Generalized Image Classifier based on Feature Selection (基於特徵選取之通用影像分類器) |
Advisor: | Lee, Chung-Mou (李忠謀) |
Degree: | Master |
Department: | Department of Computer Science and Information Engineering |
Year of Publication: | 2015 |
Academic Year: | 103 |
Language: | Chinese |
Pages: | 49 |
Chinese Keywords: | image classification, pattern recognition |
English Keywords: | F-score |
Thesis Type: | Academic thesis |
Traditionally, building a classification system involves a complex pipeline: collecting training data, extracting features, training a recognition model, and analyzing accuracy. In general, such a system can only recognize images from a specific domain, because fixing the image domain in advance lets training exploit that domain's characteristic information, which allows the system to reach good accuracy. Most prior studies propose image classification methods targeted at specific domains; this research differs from such work in that it builds an image recognition system without requiring the image domain to be specified.

In practical applications, training data are difficult to collect, and only a small number of training samples are available. Our system is therefore designed to extract a large number of features of many different kinds from a small training set, so that it has, as far as possible, the capacity to represent images of diverse domains. We then apply an SVM combined with F-score feature selection to image classification, choosing from the large feature pool a general feature subset that satisfies the classification task, thereby realizing a generalized classifier whose applications are not restricted to a particular image domain.
Establishing an image classification system traditionally requires a series of complex procedures, including collecting training samples, extracting features, training a model, and analyzing accuracy. In general, such a system can only identify images from a specific domain, because restricting the domain lets training exploit domain-specific knowledge, which leads to higher accuracy. Most image classification methods in earlier studies focus on specific domains; in contrast, the method proposed here builds an image classification system without specifying the image domain in advance.

In practical applications, training images are not easy to collect, so the available training samples are few. We therefore build an image classifier from a small number of training samples while extracting a large number of features of every variety, equipping the classifier with the ability to represent images of different topics. To create a general classifier that can function without a fixed image domain, we combine an SVM classifier with the F-score feature selection method, selecting from the large feature pool a feature subset that satisfies the requirements of the classification task.
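The SVM-plus-F-score pipeline described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation (which used LIBSVM and image features such as SIFT/SURF): it computes the two-class F-score of Chen and Lin (2006) for each feature, keeps the top-ranked features, and trains an SVM on the reduced feature set. The synthetic data, the cutoff of 10 features, and the RBF kernel are all assumptions made for the example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC


def f_score(X, y):
    """Per-feature F-score for a binary problem (Chen & Lin, 2006):
    between-class separation of the class means divided by the
    within-class sample variances."""
    pos, neg = X[y == 1], X[y == 0]
    mean_all = X.mean(axis=0)
    mean_pos, mean_neg = pos.mean(axis=0), neg.mean(axis=0)
    numer = (mean_pos - mean_all) ** 2 + (mean_neg - mean_all) ** 2
    denom = pos.var(axis=0, ddof=1) + neg.var(axis=0, ddof=1)
    return numer / (denom + 1e-12)  # guard against zero variance


# Toy stand-in for a large mixed feature pool: 100 features,
# of which only a handful are actually informative.
X, y = make_classification(n_samples=300, n_features=100,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

scores = f_score(X_tr, y_tr)
top = np.argsort(scores)[::-1][:10]   # keep the 10 best-ranked features

clf = SVC(kernel="rbf").fit(X_tr[:, top], y_tr)
print("accuracy with selected features:", clf.score(X_te[:, top], y_te))
```

Because the F-score is computed independently per feature, this is a filter-style selection step: it is cheap even for very large feature pools, at the cost of ignoring feature interactions, which is the trade-off the combined SVM/F-score approach accepts.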