
Graduate Student: Chao, Chia-Wei (趙佳緯)
Thesis Title: Advanced Rider Assistance System (ARAS) Based on 360-Degree Panoramic Imaging (基於360度環景影像之騎士進階安全輔助系統研究)
Advisor: Lee, Greg C. (李忠謀)
Oral Defense Committee: Koh, Jia-Ling (柯佳伶); Chiang, Cheng-Chieh (江政杰); Lee, Greg C. (李忠謀)
Oral Defense Date: 2024/01/31
Degree: Master
Department: Department of Computer Science and Information Engineering
Year of Publication: 2024
Graduation Academic Year: 112 (ROC calendar; 2023-2024)
Language: Chinese
Number of Pages: 65
Keywords: Motorcycle Driving Safety, Panoramic Imaging, Seamless Stitching, YOLO-NAS, Pedestrian and Rider Classification
Research Methods: Comparative Research; Observational Research
DOI URL: http://doi.org/10.6345/NTNU202400287
Thesis Type: Academic Thesis
Access Count: Views: 34; Downloads: 2

    The safe riding behavior of motorcycle riders is built on an understanding of the surrounding environment, enabling them to respond effectively to unexpected situations and ensure safe travel. However, a rider's field of view is largely restricted by the limited coverage of the rearview mirrors and by the helmet itself. Existing Advanced Rider Assistance Systems (ARAS) aim to enhance riding safety, but their high cost makes it difficult for many riders to address visibility blind spots. Moreover, current ARAS lack a panoramic imaging system for observing blind spots, so riders struggle to perceive the surrounding environment in real time, particularly at high speed and in complex road situations, which increases the risk of accidents. This research therefore develops an ARAS based on 360-degree panoramic imaging to improve motorcycle rider safety.
    This research mounts three fisheye cameras on top of the helmet to capture the left-rear, front, and right-rear views, and stitches them into a seamless 360-degree panorama, restoring the rearward view that the helmet would otherwise obstruct. The pipeline corrects fisheye image distortion, calibrates the camera shooting angles, projects the images onto a virtual cylinder, and uses computer vision techniques to stitch the three images seamlessly into a single 360-degree panoramic view. The study then integrates assistance functions into the panorama to form an ARAS that helps the rider detect and identify the surrounding environment, covering low-resolution object detection methods, pedestrian and rider classification, and orientation and distance estimation. Finally, a buffering method is applied to reduce false warnings, and the system is evaluated and optimized using object detection recall and object classification accuracy as metrics. Experimental results show that the system generates a smooth 360-degree panoramic view and detects and classifies objects effectively, achieving a detection recall of 80.4% and an assistance-function classification accuracy of 96.9%.
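    The stitching pipeline summarized above (fisheye correction, cylindrical projection, and table-based compositing) can be sketched with OpenCV. The following Python sketch is illustrative only, not the thesis implementation: the frame size, focal length f, camera matrix K, distortion coefficients D, and the fixed overlap are placeholder assumptions that would come from the calibration procedure described in Chapter 4.

import cv2
import numpy as np

# --- Placeholder calibration values (the thesis derives these in Chapter 4) ---
W, H = 1280, 720                       # per-camera frame size (assumed)
f = 500.0                              # focal length in pixels (assumed)
K = np.array([[f, 0, W / 2],
              [0, f, H / 2],
              [0, 0, 1]], dtype=np.float64)
D = np.zeros(4)                        # fisheye distortion coefficients (assumed)

# Precompute the fisheye-undistortion lookup maps once.
und_map1, und_map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, (W, H), cv2.CV_16SC2)

# Precompute a cylindrical-projection lookup map once: for every pixel on the
# cylinder, find the corresponding source pixel in the undistorted image.
xs, ys = np.meshgrid(np.arange(W), np.arange(H))
theta = (xs - W / 2) / f               # angle around the cylinder axis
h = (ys - H / 2) / f                   # height along the cylinder axis
X, Y, Z = np.sin(theta), h, np.cos(theta)
cyl_map_x = (f * X / Z + W / 2).astype(np.float32)
cyl_map_y = (f * Y / Z + H / 2).astype(np.float32)

def to_cylinder(frame):
    """Undistort one fisheye frame and project it onto the virtual cylinder."""
    flat = cv2.remap(frame, und_map1, und_map2, cv2.INTER_LINEAR)
    return cv2.remap(flat, cyl_map_x, cyl_map_y, cv2.INTER_LINEAR)

def stitch_panorama(left_rear, front, right_rear, overlap=100):
    """Composite the three cylindrical strips side by side. The thesis instead
    searches for optimal seam positions and smooths the joins (Secs. 4.7-4.9);
    a fixed overlap crop is used here purely for illustration."""
    strips = [to_cylinder(img) for img in (left_rear, front, right_rear)]
    cropped = [s[:, overlap // 2 : W - overlap // 2] for s in strips]
    return np.hstack(cropped)

    Because both lookup maps are computed once and reused, the per-frame work reduces to two cv2.remap calls per camera, mirroring the table-lookup approach the thesis uses for real-time panorama generation (Sections 4.9.1-4.9.2).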

    Chapter 1 Introduction
      1.1 Research Background and Motivation
      1.2 Research Objectives
        1.2.1 Strengthening Multi-Camera Image Stitching Techniques
        1.2.2 Strengthening Intelligent Safety Assistance Techniques
    Chapter 2 Literature Review
      2.1 Feature-Based versus Region-Based (Direct) Alignment
      2.2 Stable Stitching Methods for Stereoscopic Panoramic Video
      2.3 Automatic Image Stitching Using SIFT
      2.4 Deep Learning-Based Object Detection and Recognition
      2.5 Object Detection and Recognition in Existing ARAS
    Chapter 3 System Overview and Experiment Planning
      3.1 System Architecture
      3.2 System Workflow
      3.3 Experimental Environment
      3.4 Device and Calibration Board Design
        3.4.1 Capture Cameras
        3.4.2 Device Mold Design
        3.4.3 Calibration Board Design
    Chapter 4 Core Technical Methods and Experiments
      4.1 Fisheye Correction
        4.1.1 Fisheye Imaging Model
        4.1.2 Image Correction
      4.2 Fisheye Image Correction Experiments
        4.2.1 Determining the Field of View RV
        4.2.2 Determining the Focal Length f
        4.2.3 Fisheye Image Correction
      4.3 Camera Calibration
        4.3.1 Feature Extraction
        4.3.2 Defining the Spatial Coordinate System
        4.3.3 Image Coordinate Transformation
        4.3.4 Solving for the Rotation Matrix R
        4.3.5 Solving for the Translation Vector T
        4.3.6 Multi-Camera Coordinate System Transformation
      4.4 Camera Calibration Experiments
        4.4.1 Feature Extraction
        4.4.2 Computing the Extrinsic Parameters R and T
      4.5 Cylindrical Projection
        4.5.1 Determining the Central Axis of the Projection Cylinder
        4.5.2 Constructing the Cylindrical Surface
        4.5.3 Deriving the Cylindrical Image Size
        4.5.4 Sampling Pixels on the Cylindrical Surface
      4.6 Cylindrical Image Projection Experiments
      4.7 Image Stitching
        4.7.1 Determining the Optimal Stitching Position
        4.7.2 Extracting the Display Range of the Stitched Image
        4.7.3 Building the Transformation Table
        4.7.4 Establishing Reference Guide Lines
      4.8 Adjacent-Image Stitching Experiments
        4.8.1 Image Segmentation
        4.8.2 Stitching and Adjusting Adjacent Images
        4.8.3 Cropping Protruding Regions of the Panorama
      4.9 Real-Time 360-Degree Panorama Generation Experiments
        4.9.1 Generating the Lookup Table
        4.9.2 Producing the 360-Degree Panorama by Table Lookup
        4.9.3 Panorama Segmentation
        4.9.4 Smoothing the Seams Between Adjacent Images
    Chapter 5 Integration Techniques and Experiments
      5.1 Object Detection
        5.1.1 Model Selection and Training
        5.1.2 Smoothing
      5.2 Object Detection Experiments
        5.2.1 Model Selection
        5.2.2 Detection Class Selection
        5.2.3 Object Detection Parameter Settings
      5.3 Pedestrian Detection
        5.3.1 Relative-Position Criterion
        5.3.2 Bounding-Box Size-Ratio Criterion
        5.3.3 Overlap-Ratio Criterion
      5.4 Pedestrian Detection Implementation
        5.4.1 Detecting Persons and Motorcycles with the Model
        5.4.2 Extracting Object Bounding-Box Positions
        5.4.3 Relative-Position Criterion
        5.4.4 Bounding-Box Size-Ratio Criterion
        5.4.5 Overlap-Ratio Criterion
        5.4.6 Rider and Pedestrian Classification
      5.5 Blind-Spot Detection
        5.5.1 Partitioning Image Regions
        5.5.2 Object Distance Estimation
        5.5.3 Warning Alerts
      5.6 Blind-Spot Detection Implementation
        5.6.1 Detecting Vehicles with the Model
        5.6.2 Image Region Partitioning
        5.6.3 Smoothing Detection Results
        5.6.4 Distance Estimation
        5.6.5 Warning Prompts
      5.7 System Integration
        5.7.1 Setting Up the Socket Server
        5.7.2 Converting Images to Byte Arrays
        5.7.3 Sending Image Data to the Server
        5.7.4 Server-Side Image Classification
        5.7.5 Server-Side Image Reception and Processing
        5.7.6 Returning Processed Results to the C# Client
        5.7.7 Receiving and Parsing the Server Response in the C# Client
        5.7.8 Displaying Results
      5.8 Test Dataset Construction
        5.8.1 Video Collection
        5.8.2 Frame-Level Evaluation
        5.8.3 Defining Clip Durations
      5.9 System Performance Evaluation and Optimization
        5.9.1 Execution Speed
        5.9.2 Recording Prediction Results
        5.9.3 Detection Result Verification and Evaluation
        5.9.4 Classification Accuracy
        5.9.5 Analysis of Individual Metrics
    Chapter 6 Conclusion and Future Directions
      6.1 Conclusion
      6.2 Future Directions
    References
    Appendix 1 Annotating and Recording Video Information
    Appendix 2 Building and Annotating the Test-Set Ground Truth
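    Chapter 5's assistance flow (YOLO-NAS detection, rider-versus-pedestrian classification by bounding-box overlap, and buffered blind-spot warnings) could be wired together as in the Python sketch below. This is a hedged illustration, not the thesis code: the model variant, the COCO class ids (0 = person, 3 = motorcycle), the overlap threshold, and the buffer parameters are assumptions, and the thesis's relative-position and size-ratio criteria (Sec. 5.3) are omitted here.

from collections import deque
from super_gradients.training import models

# YOLO-NAS model with COCO-pretrained weights, loaded via the Deci-AI
# super-gradients library that the thesis builds on.
model = models.get("yolo_nas_s", pretrained_weights="coco")

def overlap_ratio(person_box, moto_box):
    """Fraction of the person box covered by the motorcycle box; a stand-in
    for the thesis's overlap-ratio criterion (Sec. 5.3.3). Boxes are
    (x1, y1, x2, y2) tuples in pixels."""
    x1 = max(person_box[0], moto_box[0]); y1 = max(person_box[1], moto_box[1])
    x2 = min(person_box[2], moto_box[2]); y2 = min(person_box[3], moto_box[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = (person_box[2] - person_box[0]) * (person_box[3] - person_box[1])
    return inter / area if area > 0 else 0.0

def classify_people(boxes, labels, overlap_thr=0.5):
    """Label each detected person 'rider' if it sufficiently overlaps any
    detected motorcycle, else 'pedestrian' (threshold is illustrative)."""
    persons = [b for b, l in zip(boxes, labels) if l == 0]
    motos = [b for b, l in zip(boxes, labels) if l == 3]
    return ["rider" if any(overlap_ratio(p, m) > overlap_thr for m in motos)
            else "pedestrian" for p in persons]

class WarningBuffer:
    """Raise a blind-spot alert only when a threat appears in most of the
    last n frames, damping single-frame false positives (cf. Secs. 5.1.2
    and 5.6.3)."""
    def __init__(self, n_frames=10, ratio=0.6):
        self.history = deque(maxlen=n_frames)
        self.ratio = ratio

    def update(self, threat_detected):
        self.history.append(bool(threat_detected))
        return sum(self.history) / len(self.history) >= self.ratio

    In use, each panoramic frame would be passed to model.predict(...), the resulting person and motorcycle boxes fed to classify_people, and WarningBuffer.update would gate whether an alert is actually shown to the rider.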

