
Student: Huang, Teng-Wei (黃騰緯)
Title: High Performance Visual Simultaneous Localization and Mapping based on a Single Camera (具高運算效率之單攝影機視覺型同時定位與建圖系統)
Advisor: Jacky Baltes (包傑奇)
Degree: Master
Department: Department of Electrical Engineering
Year of publication: 2016
Graduation academic year: 104 (ROC calendar)
Language: Chinese
Number of pages: 80
Keywords: simultaneous localization and mapping (SLAM), FastSLAM, image-based measuring system, single camera, mobile robot
DOI URL: https://doi.org/10.6345/NTNU202204375
Thesis type: Academic thesis

FastSLAM is the most widely used algorithm for solving the simultaneous localization and mapping (SLAM) problem. Although FastSLAM 2.0 is computationally more efficient than EKF-SLAM, its performance degrades as exploration time grows, because the number of comparisons against previously observed landmarks keeps increasing. This thesis therefore proposes an improved method, the Rapid Operation SLAM algorithm (ROSLAM), which, when predicting the robot's pose, examines the current convergence of the particles to decide whether sensor measurements should be used to update the pose, and which at every time step compares the number of landmarks within sensing range with that of the previous step to decide whether the landmarks should be updated. Simulation results show that the proposed algorithm runs at least twice as fast as both FastSLAM 2.0 and CESLAM while maintaining comparable accuracy. In addition, SLAM algorithms are commonly paired with laser range finders, but high-quality laser sensors are both heavy and expensive. This thesis therefore also proposes an image-based feature measurement system that uses a single camera and image processing to detect edge features between the floor and non-floor objects and to compute the distance between each feature point and the robot, so as to replace a conventional laser sensor. Experimental results demonstrate that the system can be used on its own for distance measurement and can also be combined with ROSLAM to form a computationally efficient single-camera visual SLAM system.

FastSLAM is one of the most popular algorithms for solving the simultaneous localization and mapping (SLAM) problem. Although FastSLAM 2.0 is computationally more efficient than EKF-SLAM, its performance tends to degrade as the number of landmarks increases. This thesis therefore proposes an improved version, called Rapid Operation SLAM (ROSLAM), which uses the convergence of the particles to decide whether the current measurements should be used to update the robot's pose. ROSLAM also compares the number of landmarks in range with that of the previous time step and uses this information to decide whether the landmark estimates should be updated. Empirical evaluation in both simulation and practical experiments shows that ROSLAM is 50% faster than FastSLAM while maintaining similar map accuracy.
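The two gating checks described above can be pictured with a short sketch. The Python fragment below is a hypothetical illustration only: it assumes, for concreteness, that the measurement-based pose correction is skipped once the particle set has converged and that landmarks are re-estimated only when the number of landmarks in range changes. The function names and the threshold value are invented for the example and are not taken from the thesis.

```python
import numpy as np

CONVERGENCE_THRESHOLD = 0.05  # metres; illustrative value, not from the thesis


def particles_converged(poses):
    """Treat the particle set as converged when the spread of the
    (x, y) pose hypotheses falls below a fixed threshold."""
    return np.max(np.std(poses[:, :2], axis=0)) < CONVERGENCE_THRESHOLD


def roslam_step(poses, control, measurements, prev_landmark_count,
                motion_model, measurement_update, landmark_update):
    """One ROSLAM-style iteration (conceptual sketch).

    poses: (N, 3) array of particle poses (x, y, theta).
    The two checks decide whether the more expensive measurement-based
    updates are executed at this time step.
    """
    # 1. Always propagate the particles with the motion model.
    poses = motion_model(poses, control)

    # 2. Pose update: incorporate sensor measurements only while the
    #    particle set has not yet converged (assumption for this sketch).
    if not particles_converged(poses):
        poses = measurement_update(poses, measurements)

    # 3. Landmark update: re-estimate landmarks only when the number of
    #    landmarks currently in range differs from the previous step.
    if len(measurements) != prev_landmark_count:
        landmark_update(poses, measurements)

    return poses, len(measurements)
```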
Since laser scanners provide accurate 3D point clouds, most SLAM research relies on them despite their high cost, weight, and power consumption. This thesis therefore proposes a visual alternative: image processing techniques are used to find feature points along the boundary between obstacles and the floor, and the distances between the robot and these feature points are then calculated. Experimental results show that the proposed system provides sufficient information for a practical visual SLAM system.
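The geometric core of such an image-based measurement can be illustrated with a minimal sketch, assuming a pinhole camera, a flat floor, and a known mounting height and downward tilt. All numeric parameters below are placeholders rather than values from the thesis, which performs its own tilt and distance calibration.

```python
import math

# Illustrative camera parameters (placeholders, not the thesis's calibration).
CAMERA_HEIGHT_M = 0.30                # height of the optical centre above the floor
CAMERA_TILT_RAD = math.radians(20.0)  # downward tilt of the optical axis
FOCAL_PX = 525.0                      # focal length in pixels (fx = fy assumed)
CX, CY = 320.0, 240.0                 # principal point for a 640x480 image


def floor_point_distance(u, v):
    """Estimate the ground-plane distance and bearing of a floor-edge
    feature imaged at pixel (u, v), assuming a pinhole camera looking
    down at a flat floor.

    Returns (distance_m, bearing_rad) relative to the camera, or None
    when the pixel ray does not intersect the floor in front of the robot.
    """
    # Vertical angle of the pixel ray below the optical axis.
    pitch = math.atan2(v - CY, FOCAL_PX)
    total = CAMERA_TILT_RAD + pitch
    if total <= 0.0:
        return None  # ray points at or above the horizon

    # Intersect the ray with the floor plane.
    forward = CAMERA_HEIGHT_M / math.tan(total)  # distance along the ground
    lateral = forward * (u - CX) / FOCAL_PX      # small-angle approximation
    distance = math.hypot(forward, lateral)
    bearing = math.atan2(lateral, forward)
    return distance, bearing


if __name__ == "__main__":
    # Example: a floor-edge feature detected near the bottom of the image.
    print(floor_point_distance(400, 420))
```

In a full pipeline, distances and bearings of this kind would play the role of the range-bearing landmark observations fed to the SLAM back end.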

Table of Contents:
Abstract (Chinese)
Abstract (English)
Acknowledgements
List of Figures
List of Tables
Chapter 1: Introduction
 1.1 Research Motivation and Background
 1.2 Research Objectives
 1.3 Thesis Organization
Chapter 2: Literature Review
 2.1 Kalman Filter
 2.2 Monte Carlo Localization
 2.3 SLAM Algorithms
  2.3.1 FastSLAM 1.0
  2.3.2 FastSLAM 2.0
  2.3.3 Computationally Efficient SLAM (CESLAM)
Chapter 3: Rapid Operation SLAM Algorithm (ROSLAM)
Chapter 4: Computationally Efficient Single-Camera Visual SLAM Algorithm
 4.1 Visual Landmarks
  4.1.1 Floor-Edge Feature Detection Algorithm
  4.1.2 Smoothing of Floor-Edge Feature Points
  4.1.3 Feature-Point Reduction with Random Sample Consensus (RANSAC)
 4.2 Image-Based Distance Measurement System
 4.3 Camera Tilt Verification System
  4.3.1 Error Analysis of the Camera Tilt Verification System
 4.4 V-ROSLAM System
Chapter 5: Experimental Results
 5.1 Experimental Equipment
 5.2 ROSLAM Simulation Results
 5.3 Camera Tilt Verification Experiments
  5.3.1 Error Analysis Results of the Camera Tilt Verification System
 5.4 Image Feature Measurement System Experiments
 5.5 V-ROSLAM Ground-Truth Point Test
 5.6 V-ROSLAM Experiments
 5.7 Discussion
Chapter 6: Conclusions and Future Work
 6.1 Conclusions
 6.2 Future Work
References
Autobiography
Academic Achievements
