
Graduate Student: Shen, Hao (沈浩)
Thesis Title: Hardware Implementation for Visual Simultaneous Localization and Mapping System (視覺型同時定位與建圖系統之硬體實現)
Advisors: Hsu, Chen-Chien (許陳鑑); Wang, Wei-Yen (王偉彥)
Degree: Master
Department: Department of Electrical Engineering
Year of Publication: 2018
Academic Year of Graduation: 106
Language: Chinese
Number of Pages: 77
Keywords (Chinese, translated): visual simultaneous localization and mapping, FPGA, pipelined design, parallel computing
Keywords (English): visual simultaneous localization and mapping, FPGA, pipeline design, parallel computing
DOI URL: http://doi.org/10.6345/THE.NTNU.DEE.007.2018.E08
Thesis Type: Academic thesis
  • Abstract (translated from Chinese): This thesis addresses the computational efficiency of visual simultaneous localization and mapping (V-SLAM) for robots. By designing FPGA hardware-acceleration circuits for the V-SLAM system, a low-cost, low-power, high-performance system is realized that lets a robot build a three-dimensional map of an unknown environment in real time while localizing itself within it. The thesis implements a V-SLAM system previously proposed by a senior lab member on an FPGA, exploiting the advantages of hardware acceleration, pipelined design, and parallel computation so that V-SLAM can deliver the robot state and the environment map in real time. To verify the computation speed and accuracy of each functional module in hardware, experiments were conducted on several platforms, including a personal computer, the FPGA, and a Nios II processor, using images captured in real environments and testing each module from different perspectives according to its function. The results show that, compared with a typical personal computer and the Nios II, the FPGA-accelerated feature-matching module is roughly 390 and 16,000 times faster, respectively. In the accuracy tests, the FPGA computations of the 2D-to-3D feature transformation module and the center-of-gravity calculation module deviate from the software results by less than 1%. For the map management module, once an approximate threshold is determined from the stereo camera parameters, applying an OR gate to the high-order bits yields results identical to the software computation. These results indicate that the V-SLAM system realized with the proposed FPGA design method achieves real-time simultaneous localization and mapping with the advantages of low cost and low power consumption.
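The map-management shortcut described in the abstract — rounding the distance threshold to a value whose comparison reduces to OR-ing the high-order bits — can be sketched as follows. This is a minimal illustration under the assumption that the approximate threshold is a power of two, 2^k; the function names and bit width are hypothetical, not taken from the thesis's actual circuit.

```python
def exceeds_threshold(value: int, k: int) -> bool:
    """Software reference: compare against the approximate threshold 2**k."""
    return value >= (1 << k)

def exceeds_threshold_or_gate(value: int, k: int, width: int = 32) -> bool:
    """Hardware-style check: OR together bits k .. width-1 of the operand.

    If any bit at position k or above is set, then value >= 2**k, so a
    single OR gate over the high-order bits reproduces the software
    comparison exactly -- no subtractor or magnitude comparator needed.
    """
    result = 0
    for i in range(k, width):
        result |= (value >> i) & 1
    return bool(result)
```

The appeal in hardware is that an OR reduction over a bit slice costs far less area and delay than a full-width comparator, which is consistent with the abstract's claim that the OR-gate decision matches the software result bit for bit.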

    This thesis addresses the problem of computational efficiency in visual simultaneous localization and mapping (V-SLAM). By applying FPGA-based hardware acceleration to the V-SLAM system, a low-cost, low-power, and computationally efficient system design is established. This design in turn allows a robot to perform three-dimensional mapping and self-localization in an unknown environment. Building on a previous V-SLAM system, the proposed design further boosts computational efficiency and accuracy through an FPGA implementation, taking advantage of pipelined design and parallel computation. To validate the performance gains from hardware acceleration, several experimental platforms were adopted, including a typical personal computer, the proposed system with FPGA-based acceleration, and a Nios II processor. Images acquired from real environments were processed and compared in different aspects. Experimental results show that the FPGA-based system is approximately 390 and 16,000 times faster in feature matching than a typical PC and a Nios II processor, respectively. As for accuracy, the relative error between the software and hardware computations of the FPGA-based system is less than 1% for the 2D-to-3D transformation and the center-of-gravity estimation. The results lead to the conclusion that it is technically feasible to develop an FPGA-accelerated V-SLAM system with low cost, low power consumption, and high computational efficiency.
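The 2D-to-3D transformation whose accuracy is evaluated above is, for a rectified stereo pair, standard stereo triangulation. A minimal sketch follows, assuming a pinhole model with focal lengths fx and fy, principal point (cx, cy), and baseline b; these symbols and the function name are illustrative assumptions, not the thesis's actual module interface.

```python
def stereo_2d_to_3d(u: float, v: float, disparity: float,
                    fx: float, fy: float, cx: float, cy: float,
                    baseline: float) -> tuple[float, float, float]:
    """Back-project a matched feature at pixel (u, v) with the given
    disparity (in pixels) into camera-frame 3D coordinates (x, y, z)."""
    if disparity <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    z = fx * baseline / disparity   # depth from disparity
    x = (u - cx) * z / fx           # lateral offset
    y = (v - cy) * z / fy           # vertical offset
    return x, y, z
```

In a fixed-point FPGA datapath these divisions are typically the dominant error source, which is consistent with the sub-1% hardware-versus-software deviation reported above.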

    Table of Contents:
    Abstract (Chinese); Abstract (English); Acknowledgments; List of Tables; List of Figures
    Chapter 1 Introduction — 1.1 Background and Motivation; 1.2 Thesis Organization
    Chapter 2 Literature Review — 2.1 V-SLAM Algorithms; 2.2 Hardware Implementation of SIFT; 2.3 FPGA Implementation Techniques (2.3.1 FIFO; 2.3.2 Single-port RAM; 2.3.3 SDRAM; 2.3.4 Flash; 2.3.5 Serial-to-Parallel & Parallel-to-Serial Modules)
    Chapter 3 Implementation of V-SLAM Hardware Acceleration Modules — 3.1 V-SLAM Hardware Architecture; 3.2 Hardware Modules (3.2.1 SIFT Module; 3.2.2 Feature Matching Module; 3.2.3 Landmark Matching Module; 3.2.4 2D-to-3D Feature Transformation Module; 3.2.5 Map Management Module; 3.2.6 Center-of-Gravity Calculation Module; 3.2.7 Camera State Estimation Module; 3.2.8 Map Update Module); 3.3 V-SLAM State Controller (3.3.1 SIFT State Machine; 3.3.2 Matching State Machine; 3.3.3 Center-of-Gravity State Machine)
    Chapter 4 Experimental Platforms — 4.1 Hardware Platforms (4.1.1 DE2-150i Multimedia Development Board; 4.1.2 D5M Image Capture Module); 4.2 Software Platforms (4.2.1 Nios II Software Platform; 4.2.2 PC Software Platform; 4.2.3 ZED Stereo Depth Camera)
    Chapter 5 Simulation and Experimental Results — 5.1 Performance of V-SLAM Hardware Modules (5.1.1 Feature Matching Module; 5.1.2 Landmark Matching Module; 5.1.3 2D-to-3D Feature Transformation Module; 5.1.4 Map Management Module; 5.1.5 Center-of-Gravity Calculation Module); 5.2 Object Tracking System; 5.3 Resource Usage Summary of All Modules
    Chapter 6 Conclusions — 6.1 Conclusions; 6.2 Future Work
    References; Autobiography; Academic Achievements

