
Graduate Student: Ye, Fang-Jia (葉芳嘉)
Thesis Title: A Humanoid Robot Learning from Demonstration System with Monocular Vision-Based Distance Measurement (具單目視覺距離量測之演示學習仿人機器人系統)
Advisor: Wang, Wei-Yen (王偉彥)
Committee Members: Su, Shun-Feng (蘇順豐); Lu, Cheng-Kai (呂成凱); Wang, Wei-Yen (王偉彥)
Oral Defense Date: 2023/07/11
Degree: Master
Department: Department of Electrical Engineering
Publication Year: 2023
Graduation Academic Year: 111 (2022-23)
Language: Chinese
Pages: 61
Keywords: learning from demonstration (LfD), monocular vision, distance measurement system, humanoid robot motion imitation system
DOI URL: http://doi.org/10.6345/NTNU202301257
Document Type: Academic thesis
Abstract (Chinese, translated): The main contribution of this thesis is a humanoid robot learning-from-demonstration system based on monocular vision distance measurement. The system combines data-driven human action recognition with robot motion control using link vectors and virtual joints, enabling the humanoid robot to imitate human actions. In addition, a visual-odometry-like method with monocular distance measurement is proposed; it includes two mathematical models that measure distance from flat-view camera images and from images taken under different camera poses. The proposed method requires neither a high-precision binocular camera nor additional sensors to measure distance. It can be applied in areas such as material handling, surveillance, and autonomous vehicle systems, with the added advantages of low cost and easy implementation. Finally, practical experiments demonstrate that the system achieves good accuracy and stability under different camera poses and environmental conditions.
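The record does not reproduce the thesis's motion-imitation details, but the idea of a link vector can be sketched: each skeleton link is represented by the unit vector from a parent joint to its child joint, and the robot imitates a pose by driving its own links toward those directions. The following is a minimal sketch under assumed inputs (3D joint positions, e.g. from a depth sensor); the joint names and coordinates are hypothetical illustrations, not the author's implementation.

    import numpy as np

    # Hedged sketch: a "link vector" is the unit direction from a parent
    # joint to a child joint of a captured human skeleton. A humanoid can
    # imitate a pose by matching its own link directions to these vectors.
    def link_vector(parent_xyz, child_xyz):
        """Unit vector pointing from the parent joint to the child joint."""
        v = np.asarray(child_xyz, dtype=float) - np.asarray(parent_xyz, dtype=float)
        norm = np.linalg.norm(v)
        if norm == 0.0:
            raise ValueError("coincident joints give no direction")
        return v / norm

    # Hypothetical skeleton fragment (meters, camera coordinates).
    joints = {
        "shoulder": (0.00, 1.40, 2.00),
        "elbow":    (0.25, 1.15, 2.00),
        "wrist":    (0.45, 0.95, 2.05),
    }

    upper_arm = link_vector(joints["shoulder"], joints["elbow"])
    forearm   = link_vector(joints["elbow"], joints["wrist"])
    print(upper_arm, forearm)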

Abstract (English): This thesis proposes a humanoid robot learning-from-demonstration system with monocular-vision-based distance measurement. The system combines data-driven human action recognition with robot motion control based on link vectors and virtual joints, enabling the humanoid robot to imitate human actions. Moreover, two mathematical models are proposed for distance measurement using flat-view images and images taken under different camera poses. The proposed method requires neither high-precision binocular cameras nor additional sensors to measure distance. Furthermore, the method can be used in various applications, such as material handling, surveillance, and autonomous vehicles, with the added advantages of low cost and ease of implementation. Finally, practical experiments show that the system is accurate and robust under different camera poses and environmental conditions.
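The two mathematical models themselves are not reproduced on this record page, but the underlying principle of monocular distance measurement can be illustrated with the standard pinhole camera model for the flat-view (no-rotation) case. The sketch below is a minimal illustration under assumed values, not the author's method: given a known real object height and the focal length in pixels, similar triangles recover the distance from the object's height in the image.

    # Minimal pinhole-model sketch (not the thesis's models): with a known
    # real object height H (m), focal length f (px), and observed pixel
    # height h (px), similar triangles give distance Z = f * H / h.
    def estimate_distance(focal_length_px: float,
                          real_height_m: float,
                          pixel_height_px: float) -> float:
        """Distance along the optical axis, in meters (flat view, no rotation)."""
        if pixel_height_px <= 0:
            raise ValueError("object must span at least one pixel")
        return focal_length_px * real_height_m / pixel_height_px

    # Hypothetical example: a 0.30 m object spanning 120 px with f = 1000 px.
    print(estimate_distance(1000.0, 0.30, 120.0))  # 2.5 m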

Table of Contents:
    Acknowledgments
    Abstract (Chinese)
    Abstract (English)
    Table of Contents
    List of Tables
    List of Figures
    Chapter 1  Introduction
        1.1  Research Motivation and Objectives
        1.2  Literature Review
            1.2.1  Learning-from-Demonstration Systems
            1.2.2  Monocular Vision Distance Measurement Methods
            1.2.3  Binocular Vision Distance Measurement Methods
            1.2.4  Action Recognition via Cluster Analysis
        1.3  Thesis Organization
    Chapter 2  Image-Based Distance Measurement Models
        2.1  Camera Sensing Model
        2.2  Image Calibration
        2.3  Feature Extraction and Feature Matching Algorithms
        2.4  Object Recognition Methods
    Chapter 3  Visual-Odometry-Like Monocular Distance Measurement Method
        3.1  Method Architecture and Flowchart
        3.2  Mathematical Model for a Camera Pose without Rotation
        3.3  Mathematical Model for a Camera Pose with Rotation
        3.4  Feature Filtering and Head Calibration System
    Chapter 4  Learning-from-Demonstration-Based Autonomous Task System for a Humanoid Robot
        4.1  Task Realization via Learning from Demonstration
        4.2  System Flowchart
        4.3  Action Library Augmentation Mechanism
    Chapter 5  Experiments and Analysis
        5.1  Experimental Platform
            5.1.1  Humanoid Robot: ROBOTIS OP3
            5.1.2  Kinect v2 Camera
        5.2  Monocular Distance Measurement Experiments with the Humanoid Robot
            5.2.1  Object Recognition
            5.2.2  Calibration Parameters
            5.2.3  Experimental Results and Error Analysis
        5.3  Learning-from-Demonstration Task Experiments with the Humanoid Robot
            5.3.1  Performance Analysis of the Action Recognition Model
            5.3.2  Analysis of the Action Augmentation Effect
            5.3.3  Learning-from-Demonstration Task Experiments
    Chapter 6  Conclusion
        6.1  Conclusions and Contributions
        6.2  Future Work
    References
    Academic Achievements

Full Text: Electronic full text embargoed; available to the public on 2028/08/07.