Graduate Student: 趙佳緯 Chao, Chia-Wei
Thesis Title: 基於360度環景影像之騎士進階安全輔助系統研究 (Advanced Rider Assistance System (ARAS) Based on 360-Degree Panoramic Imaging)
Advisor: 李忠謀 Lee, Greg C.
Oral Defense Committee: 柯佳伶 Koh, Jia-Ling; 江政杰 Chiang, Cheng-Chieh; 李忠謀 Lee, Greg C.
Oral Defense Date: 2024/01/31
Degree: 碩士 Master
Department: 資訊工程學系 Department of Computer Science and Information Engineering
Year of Publication: 2024
Academic Year of Graduation: 112
Language: Chinese
Number of Pages: 65
Chinese Keywords: 機車駕駛安全、環景影像、無縫拼接、YOLO-NAS、行人騎士分類
English Keywords: Motorcycle Driving Safety, Panoramic Imaging, Seamless Stitching, YOLO-NAS, Pedestrian and Rider Classification
Research Methods: Comparative research, observational research
DOI URL: http://doi.org/10.6345/NTNU202400287
Thesis Type: Academic thesis
Views: 80; Downloads: 4
A motorcycle rider's safe riding behavior is built on an understanding of the surrounding environment, enabling the rider to respond to sudden situations and ride safely. However, a rider's field of view is restricted mainly by the limited coverage of the rearview mirrors and by the motorcycle helmet. Existing Advanced Rider Assistance Systems (ARAS) aim to improve riding safety, but their high cost makes it difficult for riders to address blind spots. Moreover, current ARAS lack a panoramic imaging system for observing blind spots, so riders struggle to perceive their surroundings in real time, especially at high speed and in complex road conditions, which increases the risk of accidents. This research therefore develops an ARAS with a 360-degree panoramic imaging system to improve motorcycle riders' safety.
This research mounts three fisheye cameras on top of the helmet to capture the left-rear, front, and right-rear views and stitches them into a 360-degree panoramic image, providing the rider with rear views that are otherwise hard to observe because of the helmet's obstruction. The work includes correcting fisheye image distortion, calibrating camera shooting angles, projecting the images onto a virtual cylinder, and using computer vision techniques to stitch the three images seamlessly into a 360-degree panorama of the surroundings. Auxiliary functions are then integrated into the panoramic image to form the ARAS, helping the rider detect and identify the surrounding environment: a low-resolution object detection method, pedestrian and rider classification, and orientation and distance estimation; finally, a buffering method reduces false warnings. The system is evaluated and optimized using object detection recall and object classification accuracy as metrics. Experimental results show that the system produces a smooth 360-degree panorama and can detect and classify objects, with a model detection recall of 80.4% and an auxiliary-function classification accuracy of 96.9%.
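The abstract's cylindrical-projection and orientation-estimation steps can be illustrated with a minimal sketch. The thesis does not publish its formulas or intrinsics, so the functions below are a generic inverse cylindrical warp and a linear column-to-bearing mapping; `f`, `cx`, `cy`, and the panorama width are hypothetical parameters.

```python
import math

def cylinder_to_image(u, v, f, cx, cy):
    """Generic inverse cylindrical warp (a sketch, not the thesis's exact math).

    Maps a cylindrical-panorama pixel (u, v) back to source-image coordinates,
    where f is the focal length in pixels and (cx, cy) the principal point.
    theta is the azimuth subtended by the column u on the virtual cylinder.
    """
    theta = (u - cx) / f
    h = (v - cy) / f
    x = f * math.tan(theta) + cx          # horizontal: undo the angular sampling
    y = f * h / math.cos(theta) + cy      # vertical: compensate the ray length
    return x, y

def column_to_bearing(u, panorama_width):
    """In a full 360-degree panorama, columns map linearly to azimuth (degrees)."""
    return 360.0 * u / panorama_width
```

Sampling the source image with `cylinder_to_image` for every panorama pixel produces one cylindrical strip per camera; the three strips can then be blended along their overlaps, and `column_to_bearing` gives the direction of any detected object relative to the rider.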
The safe riding behavior of motorcycle riders relies on their understanding of the surrounding environment, enabling them to respond effectively to unexpected situations and ride safely. However, the rider's field of view is restricted mainly by the limited coverage of the rearview mirrors and by the motorcycle helmet. Existing Advanced Rider Assistance Systems (ARAS) aim to enhance riding safety, but their high cost makes it difficult for motorcycle riders to address visibility blind spots. Furthermore, current ARAS lack a panoramic imaging system for observing blind spots, making it difficult for riders to perceive the surrounding environment in real time, particularly at high speed and in complex road situations, thereby increasing the risk of accidents. This research therefore develops an ARAS based on 360-degree panoramic imaging to improve motorcycle rider safety.
The research mounts three fisheye cameras on top of the helmet, capturing the left-rear, front, and right-rear views. These images are stitched into a seamless 360-degree panorama, compensating for the views obstructed by the helmet and enhancing overall riding safety. The work involves correcting fisheye image distortion, calibrating camera shooting angles, projecting the images onto a virtual cylinder, and using computer vision techniques to stitch the three images seamlessly into a 360-degree panoramic view. The study then integrates auxiliary functions into the panoramic images to form an ARAS that helps riders detect and identify their surroundings, covering a low-resolution object detection method, pedestrian and rider classification, and orientation and distance estimation. Finally, a buffering method is applied to reduce false warnings, and the system is evaluated and optimized using object detection recall and object classification accuracy as metrics. Experimental results demonstrate that the system generates a smooth 360-degree panoramic view and detects and classifies objects effectively, achieving an object detection recall of 80.4% and an auxiliary-function classification accuracy of 96.9%.
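The buffering method for reducing false warnings is described only at a high level. One common realization of this idea is a sliding-window debounce: an alert fires only after enough positive detections accumulate in the last few frames, so a single spurious detection does not trigger a warning. The window size `n` and threshold `k` below are illustrative, not the thesis's actual values.

```python
from collections import deque

class WarningBuffer:
    """Sliding-window debounce for detection-driven warnings (a sketch).

    An alert is raised only when at least k of the last n frames contained
    a positive detection, suppressing one-frame false positives.
    """

    def __init__(self, n=5, k=3):
        self.window = deque(maxlen=n)  # keeps only the most recent n results
        self.k = k

    def update(self, detected):
        """Record one frame's detection result; return True if an alert fires."""
        self.window.append(bool(detected))
        return sum(self.window) >= self.k
```

Feeding the per-frame detector output through `update` trades a small alert latency (up to k frames) for a much lower false-warning rate, which matters when spurious alerts would distract the rider.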