
Graduate Student: Kao, Fan (高梵)
Thesis Title: Occlusion handling on marker-based augmented reality (基於標記物擴增實境下的遮擋處理)
Advisor: Lee, Chung-Mou (李忠謀)
Degree: Master
Department: Department of Computer Science and Information Engineering
Year of Publication: 2018
Graduation Academic Year: 106
Language: Chinese
Number of Pages: 35
Keywords: Augmented Reality, Occlusion Problem, Depth Image, Image Processing
DOI URL: http://doi.org/10.6345/THE.NTNU.DCSIE.031.2018.B02
Thesis Type: Academic thesis
    This study proposes a marker-based occlusion handling algorithm that uses a depth camera to obtain geometric information about the real environment and thereby handle occlusion of virtual objects. Few existing occlusion handling studies optimize for processing time, yet augmented reality applications must reach at least 30 frames per second (FPS) to provide a good user experience, so occlusion handling has to weigh processing time alongside visual fidelity. This study therefore proposes an algorithm optimized for processing speed: it dynamically computes the region that actually requires occlusion handling, minimizing the number of pixels that must be processed. The experiments show that with this dynamic processing region, the computation time of occlusion handling drops to a quarter of the original, and the system ultimately reaches 30 FPS on a RealSense SR300, matching the camera's native frame rate.
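    The thesis body is not reproduced in this record, but the idea the abstract describes can be illustrated with a minimal sketch: render the virtual object offscreen to obtain its color and depth, then depth-test it against the camera's depth map only inside the object's bounding box. The function name, array layout, and the zero-means-no-data convention below are illustrative assumptions, not the thesis's actual code.

```python
import numpy as np

def composite_with_occlusion(real_rgb, real_depth, virtual_rgb, virtual_depth):
    """Blend a rendered virtual object into the camera frame, hiding every
    virtual pixel that lies behind the real surface measured by the depth
    camera. All inputs are aligned per-pixel arrays of the same resolution."""
    # Dynamic processing region: only the bounding box of the rendered
    # virtual geometry needs a depth test, so crop to it first.
    ys, xs = np.nonzero(virtual_depth > 0)
    if ys.size == 0:
        return real_rgb                      # nothing rendered, nothing to do
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1

    out = real_rgb.copy()
    v_rgb = virtual_rgb[y0:y1, x0:x1]
    v_dep = virtual_depth[y0:y1, x0:x1]
    r_dep = real_depth[y0:y1, x0:x1]

    # Keep a virtual pixel only where it exists and is closer to the camera
    # than the measured real surface; a 0 depth reading is treated as
    # "no measurement", in which case the virtual pixel is shown.
    visible = (v_dep > 0) & ((r_dep == 0) | (v_dep < r_dep))
    out[y0:y1, x0:x1][visible] = v_rgb[visible]
    return out
```

    Restricting every per-pixel operation to the virtual object's bounding box makes the cost scale with the object's on-screen footprint rather than the full frame, which is consistent with the four-fold reduction in processing time the abstract reports.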
    Because this study does not refine the raw depth image, the width of the noise pixels in the raw depth image was also measured to confirm its usability. The experimental results show that when the object is 50 to 70 cm from the lens, the noise width decreases gradually from 30 pixels to 10 pixels, demonstrating that within a suitable working distance the noise does not severely affect the occlusion handling result.
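    As a rough illustration of how such a noise-width measurement could be taken, the sketch below counts invalid depth readings per row in a strip straddling a known vertical object edge. The function name, the strip width, and the zero-means-invalid convention are assumptions made for illustration, not the thesis's measurement procedure.

```python
import numpy as np

def mean_noise_width(raw_depth, edge_x, band=40):
    """Average horizontal run of invalid (zero) depth readings in a strip
    straddling a vertical object edge located near column `edge_x`."""
    x0 = max(edge_x - band, 0)
    strip = raw_depth[:, x0:edge_x + band]
    invalid = strip == 0                 # assume 0 means "no measurement"
    # When a single noise band crosses the strip, the per-row count of
    # invalid pixels equals that band's width in pixels for that row.
    widths = invalid.sum(axis=1)
    return float(widths[widths > 0].mean()) if np.any(widths > 0) else 0.0
```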

    This paper presents an occlusion handling algorithm for marker-based augmented reality systems, which obtains the geometric information of the real environment from a depth camera to achieve occlusion handling of virtual objects. Existing occlusion handling research pays little attention to processing time, but augmented reality applications need to reach at least 30 frames per second (FPS) to provide a good experience.
    This paper proposes an algorithm optimized for processing speed, which sets the processing range dynamically. According to the experiments, the processing time of occlusion handling is reduced by three quarters, reaching 30 FPS, the same as the native frame rate of the RealSense SR300.
    This paper does not refine the raw depth image. To ensure its usability, the width of noise pixels in raw depth images is measured. According to the experimental results, when the object is 50 to 70 cm away from the lens, the pixel width of the noise gradually decreases from 30 pixels to 10 pixels, which shows that the noise does not seriously affect the result of occlusion handling when the system is used at a suitable distance.

    Acknowledgments
    Abstract (Chinese)
    Abstract (English)
    1 Introduction
      1.1 Research Motivation
      1.2 Research Objectives
      1.3 Research Scope and Limitations
      1.4 Thesis Organization
    2 Literature Review
      2.1 Occlusion Handling
        2.1.1 Occlusion Handling Based on 2D Contours
        2.1.2 Occlusion Handling Based on 3D Contours
        2.1.3 Occlusion Handling Based on Depth Information
      2.2 Marker-Based Augmented Reality Systems
        2.2.1 ARToolKit
        2.2.2 Vuforia
        2.2.3 ArUco
    3 Research Methodology
      3.1 Research Goals
      3.2 System Pipeline
      3.3 Camera Calibration
      3.4 Image Data Acquisition
        3.4.1 Color and Depth Image Acquisition
        3.4.2 Marker Detection
        3.4.3 Marker Pose Estimation
      3.5 Offscreen Rendering
      3.6 Occlusion Handling
        3.6.1 Dynamic Occlusion Handling Range
        3.6.2 Computing the Occlusion-Handled Composite Image
    4 Experimental Design
      4.1 Development Environment
      4.2 Execution Time
      4.3 Verifying the Effect of the Dynamic Occlusion Handling Range
      4.4 Usage Scenarios with Multiple Markers
      4.5 Occlusion Handling with Point Clouds
      4.6 Comparison with Other Methods
      4.7 Verifying the Noise Level of the Raw Depth Image
    5 Conclusions and Future Work
      5.1 Conclusions
      5.2 Applications
      5.3 Future Work
    References

    Amin, D., & Govilkar, S. (2015). Comparative study of augmented reality SDK's. International Journal on Computational Science & Applications, 5(1), 11–26.
    Du, C., Chen, Y.-L., Ye, M., & Ren, L. (2016). Edge snapping-based depth enhancement for dynamic occlusion handling in augmented reality. In 2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (pp. 54–62).
    Durrant-Whyte, H., & Bailey, T. (2006). Simultaneous localization and mapping: Part I. IEEE Robotics & Automation Magazine, 13(2), 99–110.
    Gao, X.-S., Hou, X.-R., Tang, J., & Cheng, H.-F. (2003). Complete solution classification for the perspective-three-point problem. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(8), 930–943.
    Garrido-Jurado, S., Muñoz-Salinas, R., Madrid-Cuevas, F. J., & Marín-Jiménez, M. J. (2014). Automatic generation and detection of highly reliable fiducial markers under occlusion. Pattern Recognition, 47(6), 2280–2292.
    Hebborn, A. K., Höhner, N., & Müller, S. (2017). Occlusion matting: Realistic occlusion handling for augmented reality applications. In 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (pp. 62–71).
    Hirokazu, K. (2016). ARToolKit. Retrieved from https://artoolkit.org/
    Klein, G., & Drummond, T. (2004). Sensor fusion and occlusion refinement for tablet-based AR. In Proceedings of the 3rd IEEE/ACM International Symposium on Mixed and Augmented Reality (pp. 38–47).
    Newcombe, R. A., Izadi, S., Hilliges, O., Molyneaux, D., Kim, D., Davison, A. J., ... Fitzgibbon, A. (2011). KinectFusion: Real-time dense surface mapping and tracking. In 2011 10th IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (pp. 127–136).
    Ong, K. C., Teh, H. C., & Tan, T. S. (1998). Resolving occlusion in image sequence made easy. The Visual Computer, 14(4), 153–165.
    Schmidt, J., Niemann, H., & Vogt, S. (2002). Dense disparity maps in real-time with an application to augmented reality. In Proceedings of the Sixth IEEE Workshop on Applications of Computer Vision (WACV 2002) (pp. 225–230).
    Tian, Y., Guan, T., & Wang, C. (2010). Real-time occlusion handling in augmented reality based on an object tracking approach. Sensors, 10(4), 2885–2900.
    Vuforia. (2017). Vuforia. Retrieved from https://www.vuforia.com/
    Wloka, M. M., & Anderson, B. G. (1995). Resolving occlusion in augmented reality. In Proceedings of the 1995 Symposium on Interactive 3D Graphics (pp. 5–12).
