
Graduate Student: Hsu, Chiung-Wei (許峻瑋)
Thesis Title: Using RANSAC for Book Cover Recognition (基於RANSAC篩選之書籍封面辨識研究)
Advisor: Lee, Chung-Mou (李忠謀)
Degree: Master
Department: Department of Computer Science and Information Engineering
Year of Publication: 2016
Graduation Academic Year: 104 (2015-2016)
Language: Chinese
Number of Pages: 44
Chinese Keywords: SURF, RANSAC, 特徵擷取 (feature extraction), 圖片辨識 (image recognition)
English Keywords: SURF, RANSAC, feature selection, image matching
DOI URL: https://doi.org/10.6345/NTNU202205047
Thesis Type: Academic thesis
Access Counts: 71 views, 17 downloads
Abstract (Chinese, translated):
This study takes book cover images captured with a mobile phone camera as the recognition target. By matching cover photographs whose viewing angles differ, within a limited range, from the cover images stored in the database, it constructs a method that recognizes books by their covers, replacing the current practice of recognizing books with OCR. The proposed method first uses SURF keypoint detection to build the keypoint information for the book covers in the database, then matches keypoints with a k-nearest-neighbor (KNN) search, and finally applies a keypoint filtering step, refined from RANSAC by exploiting properties of books, to address RANSAC's blind spots when filtering keypoint matches. Together with this RANSAC-based refinement, a decision criterion derived from Miksik's repeatability score is proposed as the basis for measuring accuracy.
The experiments are divided into two parts. The first part verifies the speed gains obtained by downscaling the input images and establishes the threshold range for the RANSAC-based refinement. The second part tests the accuracy of the proposed method in real scenes: with a user holding up a book and photographing it, it verifies the proposed method and measures how strongly the thresholds from the first part affect its accuracy.
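The pipeline summarized above (SURF keypoints, k-NN matching, RANSAC-style filtering) can be illustrated with a short baseline sketch. This is not the thesis' own N2Area/RANSAC Remain Rate refinement, whose details are given in Chapter 3; it is a minimal OpenCV version of the standard SURF + k-NN + RANSAC homography pipeline, assuming an opencv-contrib build (the non-free xfeatures2d module) and illustrative parameter values.

```python
import cv2
import numpy as np

# SURF lives in the non-free opencv-contrib build (cv2.xfeatures2d);
# the Hessian threshold here is illustrative, not the thesis' setting.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
matcher = cv2.BFMatcher(cv2.NORM_L2)

def match_cover(query_gray, cover_gray, ratio=0.7, ransac_thresh=5.0):
    """Match a photographed cover against one stored cover image.

    Returns (inlier_count, homography). The ratio test and RANSAC
    reprojection threshold are placeholder values.
    """
    kp1, des1 = surf.detectAndCompute(query_gray, None)
    kp2, des2 = surf.detectAndCompute(cover_gray, None)
    if des1 is None or des2 is None:
        return 0, None

    # k-NN matching (k = 2) followed by Lowe's ratio test
    knn = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in knn
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    if len(good) < 4:  # a homography needs at least four correspondences
        return 0, None

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC fits a homography; the mask marks matches that survive filtering
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
    inliers = int(mask.sum()) if mask is not None else 0
    return inliers, H
```

For database lookup, a query photo would be matched against every stored cover in this way and the cover with the most surviving matches reported as the recognition result.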

Abstract (English):
This paper provides a method for book recognition through book cover image matching instead of OCR. First, we perform feature matching based on SURF, then use nearest-neighbor matching and RANSAC to select the matched candidates. To improve the quality of the matched candidates, we develop a selection method that addresses RANSAC's blind spot when filtering feature matches.
The experiments have two parts. The first part examines the effect of resizing the input image and determines the threshold for our method. The second part tests the accuracy both in the experimental environment and in real-world use. We find that our method achieves an accuracy of about 93% and that this accuracy is maintained for image rotations of up to 15 degrees.
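The 93% figure comes from an accept/reject decision made for each candidate cover. The thesis bases this decision on its RANSAC Remain Rate and a criterion derived from Miksik's repeatability score (Section 3.4); as a rough, hypothetical illustration of the idea only, the sketch below accepts a candidate when the fraction of matches surviving RANSAC exceeds a threshold. The helper names and both cutoff values are assumptions, not the thesis' tuned values.

```python
def remain_rate(num_inliers: int, num_matches: int) -> float:
    """Fraction of k-NN matches that survive RANSAC filtering
    (a stand-in for the thesis' RANSAC Remain Rate, Section 3.4.2)."""
    return num_inliers / num_matches if num_matches else 0.0

def accept_candidate(num_inliers: int, num_matches: int,
                     min_inliers: int = 10, min_rate: float = 0.5) -> bool:
    """Accept a candidate cover only if enough matches survive RANSAC;
    both thresholds are hypothetical, not the experimentally tuned ones."""
    return (num_inliers >= min_inliers
            and remain_rate(num_inliers, num_matches) >= min_rate)
```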

Table of Contents
List of Figures
List of Tables
Chapter 1  Introduction
  1.1  Research Motivation
  1.2  Research Objectives
  1.3  Scope and Limitations
  1.4  Thesis Organization
Chapter 2  Literature Review
  2.1  Feature Point Detection Methods
    2.1.1  Scale-Invariant Feature Transform (SIFT)
    2.1.2  Speeded-Up Robust Features (SURF)
    2.1.3  FAST and Its Variants
    2.1.4  Feature Point Matching
  2.2  Perspective Transformation
  2.3  Evaluation of Feature Point Matching
Chapter 3  Methodology
  3.1  System Architecture
  3.2  Preprocessing
  3.3  Speeded-Up Robust Features (SURF)
    3.3.1  Integral Image
    3.3.2  Hessian Matrix Based on the Integral Image
    3.3.3  SURF Keypoints and Their Descriptors
  3.4  Keypoint Matching and Filtering
    3.4.1  Nearest-Neighbor Matching (NN) and RANSAC
    3.4.2  N2Area Filtering and RANSAC Remain Rate
    3.4.3  Evaluation Criterion Based on the Repeatability Score
Chapter 4  Experiments
  4.1  Development Environment and Experimental Database
    4.1.1  Database and Sources of Experimental Data
  4.2  Experimental Design and Evaluation Methods
    4.2.1  Experiment 1
    4.2.2  Experiment 2
    4.2.3  Experiment 3
    4.2.4  Experiment 4
  4.3  Summary of Experiments
Chapter 5  Conclusion
References

[1] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.
[2] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: Speeded up robust features," in Computer Vision - ECCV 2006, Springer, 2006, pp. 404-417.
[3] E. Rosten and T. Drummond, "Machine learning for high-speed corner detection," in Computer Vision - ECCV 2006, Springer, 2006, pp. 430-443.
[4] E. Mair, G. D. Hager, D. Burschka, M. Suppa, and G. Hirzinger, "Adaptive and generic corner detection based on the accelerated segment test," in Computer Vision - ECCV 2010, Springer, 2010, pp. 183-196.
[5] E. Rosten, R. Porter, and T. Drummond, "Faster and better: A machine learning approach to corner detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 1, pp. 105-119, 2010.
[6] S. Leutenegger, M. Chli, and R. Y. Siegwart, "BRISK: Binary robust invariant scalable keypoints," in IEEE International Conference on Computer Vision (ICCV), 2011, pp. 2548-2555.
[7] M. Calonder, V. Lepetit, M. Ozuysal, T. Trzcinski, C. Strecha, and P. Fua, "BRIEF: Computing a local binary descriptor very fast," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 7, pp. 1281-1298, 2012.
[8] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: An efficient alternative to SIFT or SURF," in IEEE International Conference on Computer Vision (ICCV), 2011, pp. 2564-2571.
[9] C. Harris and M. Stephens, "A combined corner and edge detector," in Alvey Vision Conference, vol. 15, Manchester, UK, 1988, p. 50.
[10] E. Tola, V. Lepetit, and P. Fua, "DAISY: An efficient dense descriptor applied to wide-baseline stereo," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 5, pp. 815-830, 2010.
[11] M. A. Fischler and R. C. Bolles, "Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography," Communications of the ACM, vol. 24, no. 6, pp. 381-395, 1981.
[12] P. Heckbert, "Fundamentals of Texture Mapping and Image Warping," Master's thesis, Department of Electrical Engineering and Computer Science, University of California at Berkeley, June 17, 1989.
[13] A. Criminisi, I. Reid, and A. Zisserman, "A plane measuring device," Image and Vision Computing, vol. 17, no. 8, pp. 625-634, June 1999.
[14] "Parameter values for the HDTV standards for production and international programme exchange," ITU-R Rec. BT.709-5, 2002.
[15] P. A. Viola and M. J. Jones, "Rapid object detection using a boosted cascade of simple features," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, 2001, pp. 511-518.
[16] D. Lowe, "Object recognition from local scale-invariant features," in IEEE International Conference on Computer Vision (ICCV), 1999, pp. 1150-1157.
[17] A. Neubeck and L. Van Gool, "Efficient non-maximum suppression," in International Conference on Pattern Recognition (ICPR), 2006.
[18] M. Brown and D. Lowe, "Invariant features from interest point groups," in British Machine Vision Conference (BMVC), 2002.
[19] O. Miksik and K. Mikolajczyk, "Evaluation of local detectors and descriptors for fast feature matching," in 21st International Conference on Pattern Recognition (ICPR), 2012, pp. 2681-2684.
[20] T. Lindeberg, "Scale-space for discrete signals," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 3, pp. 234-254, 1990.
[21] Open Library. URL: https://openlibrary.org/lists
[22] Cover Browser. URL: http://www.coverbrowser.com/
[23] Amazon. URL: http://www.amazon.com
[24] Q. Fan, V. Lepetit, and P. Fua, "DAISY: An efficient dense descriptor applied to wide-baseline stereo," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 5, pp. 815-830, 2010.
[25] K. Mikolajczyk and C. Schmid, "A performance evaluation of local descriptors," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 10, pp. 1615-1630, 2005.
[26] L. Juan and O. Gwun, "A comparison of SIFT, PCA-SIFT and SURF," International Journal of Image Processing (IJIP), vol. 3, no. 4, pp. 143-152, 2009.
