
Graduate Student: Su, Jing-Ting (蘇靖婷)
Thesis Title: Enhancing Image Stitching Quality via Optimized Feature Point and Inlier Selection with Mesh Deformation (基於特徵點及內點選擇優化之網格變形影像拼接品質提升)
Advisor: Yueh, Mei-Heng (樂美亨)
Oral Examination Committee: Yueh, Mei-Heng (樂美亨); Huang, Tsung-Ming (黃聰明); Kuo, Yueh-Cheng (郭岳承)
Oral Examination Date: 2024/06/18
Degree: Master
Department: Department of Mathematics
Year of Publication: 2024
Academic Year of Graduation: 112
Language: English
Number of Pages: 45
Chinese Keywords: SIFT, PCA-SIFT, LO-RANSAC, APAP, AANAP, Poisson image blending
English Keywords: SIFT, PCA-SIFT, LO-RANSAC, APAP, AANAP, Poisson Blending
DOI URL: http://doi.org/10.6345/NTNU202401007
Thesis Type: Academic thesis
Access Count: Views: 43; Downloads: 9
    With the advancement of computer vision technology, image processing has come to play an extremely important role. Image stitching not only improves visual effects but is also widely applied in fields such as drone aerial photography, satellite image processing, geographic information systems (GIS), medical image diagnosis, security surveillance systems, and virtual reality. It allows us to extend a limited field of view by merging images from multiple viewpoints into a single seamless panorama. However, when the images to be stitched exhibit parallax or contain complex backgrounds, problems arise: relying solely on a projective transformation for image registration easily leads to stitching errors, ghosting, or blurring. Therefore, aligning images more precisely, remaining stable under changes within the images (rotation, illumination, scale, and so on), effectively reducing distortion during stitching, and achieving real-time image stitching so that the results are both better and more efficient are the directions we hope to pursue.
    This study focuses on improving the accuracy and quality of image stitching while reducing distortion and keeping computational cost as low as possible. Stitching is carried out by comparing several feature point selection methods, applying optimized transformation estimation and mesh-deformation-based image warping, and combining the results with linear blending or Poisson image blending. Taking the stitching of two images as the primary example, the one-sided transfer (projection) error together with NIQE and BRISQUE are used as measures of alignment accuracy and image quality, the stitching results are compared, and a relatively superior image stitching method is identified.
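    As a point of reference, a common formulation of the one-sided transfer error mentioned above (the thesis's exact definition may differ) is the average distance between each matched point in the target image and the projection of its correspondence from the source image under the estimated transformation H:

    \[
    E_{\text{transfer}} = \frac{1}{N} \sum_{i=1}^{N} \left\| \mathbf{x}'_i - \pi\!\left( H\, \tilde{\mathbf{x}}_i \right) \right\|_2 ,
    \]

    where \(\tilde{\mathbf{x}}_i\) is the homogeneous coordinate of the feature point \(\mathbf{x}_i\) in the source image, \(\mathbf{x}'_i\) is its match in the target image, \(H\) is the estimated transformation, and \(\pi([x, y, w]^{\top}) = (x/w,\, y/w)\) converts the projected point back to inhomogeneous coordinates.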

    With the advancement of computer vision technology, image processing has come to play an extremely important role. Image stitching not only enhances visual effects but is also widely applied in areas such as drone aerial photography, satellite imagery processing, Geographic Information Systems (GIS), medical image diagnosis, security monitoring systems, and virtual reality. It allows us to expand our field of view despite visibility limitations, merging images from multiple perspectives into a seamless panoramic image. However, if the images to be stitched have parallax or complex backgrounds, several problems may arise: using only projective transformations for image registration can easily lead to stitching errors or the appearance of ghosting and blurriness. Therefore, aligning images more precisely, remaining stable under changes in the images (rotation, illumination, scale, etc.), effectively reducing distortion during the stitching process, and achieving real-time image stitching to make the results better and more efficient are the directions we hope to advance in.
    This thesis aims to enhance the precision and quality of image stitching while reducing distortion and minimizing computational costs as much as possible. Stitching is accomplished by comparing a variety of feature point detection methods, optimized transformation estimation, and mesh-deformation-based image warping, coupled with linear blending or Poisson image blending. Centered primarily on the stitching of two images, the study employs the one-sided transfer error, NIQE, and BRISQUE as benchmarks for evaluating the accuracy and quality of the stitching results, and compares the outcomes to identify a relatively optimal stitching approach.
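    For illustration only, the minimal Python/OpenCV sketch below outlines the basic two-image pipeline described above: SIFT feature detection, matching with Lowe's ratio test, a single global homography estimated with plain RANSAC, and a simple averaged (linear) blend in the overlap. It is a baseline sketch rather than the thesis's actual method, which uses LO-RANSAC for inlier selection and mesh-deformation-based warping (APAP/AANAP); the file names are placeholders, and the canvas layout assumes the second image extends to the right of the first.

    import cv2
    import numpy as np

    # Load the two images to be stitched (placeholder file names).
    img1 = cv2.imread("left.jpg")   # reference image
    img2 = cv2.imread("right.jpg")  # image to be warped onto the reference

    # 1. Feature detection and description with SIFT.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY), None)
    kp2, des2 = sift.detectAndCompute(cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY), None)

    # 2. Feature matching with Lowe's ratio test (query = img2, train = img1).
    matcher = cv2.BFMatcher()
    good = []
    for pair in matcher.knnMatch(des2, des1, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])

    # 3. Robust transformation estimation. Plain RANSAC on a single global
    #    homography is used here; the thesis instead applies LO-RANSAC and
    #    mesh-based (APAP/AANAP) warping. Assumes at least 4 good matches.
    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # 4. Warp the second image into the reference frame and blend the overlap
    #    with a simple 50/50 average (a stand-in for linear blending).
    h, w = img1.shape[:2]
    canvas = cv2.warpPerspective(img2, H, (w * 2, h))
    canvas[:, :w] = np.where(canvas[:, :w] > 0,
                             (0.5 * canvas[:, :w] + 0.5 * img1).astype(np.uint8),
                             img1)
    cv2.imwrite("stitched.jpg", canvas)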

    1 Introduction 1
      1.1 Motivation and Objectives 1
      1.2 Related Work 2
      1.3 Outline 3
    2 Image Registration 4
      2.1 Feature Detection 4
        2.1.1 The Scale-Invariant Feature Transform 4
        2.1.2 Speeded Up Robust Features 10
        2.1.3 PCA-SIFT 11
      2.2 Feature Matching 11
      2.3 Transformation Estimation 12
        2.3.1 Projective Transformation 12
        2.3.2 Random Sample Consensus 14
        2.3.3 Locally Optimized RANSAC 15
    3 Image Warping and Blending 16
      3.1 Image Warping 17
        3.1.1 As-Projective-As-Possible Image Stitching with Moving DLT 17
        3.1.2 Adaptive As-Natural-As-Possible Image Stitching 19
      3.2 Seamless Blending 22
        3.2.1 Linear Blending 22
        3.2.2 Poisson Blending 22
    4 Experimental Methods & Results 24
      4.1 Experimental Procedure 24
        4.1.1 Image Preprocessing 26
        4.1.2 Numerical Experiments 28
      4.2 Image Quality Assessment 31
        4.2.1 NIQE 31
        4.2.2 BRISQUE 32
      4.3 Image Stitching Results 34
        4.3.1 Experimental Stitching Results 35
    5 Conclusion & Future Work 40
    References 42

