
Graduate Student: 許博翔 (Hsu, Po-Hsiang)
Thesis Title: A Lightweight Detection Model for License Plate Recognition (輕量化車牌辨識模型)
Advisor: 林政宏 (Lin, Cheng-Hung)
Oral Defense Committee: 賴穎暉, 陳勇志, 林政宏
Oral Defense Date: 2021/09/24
Degree: Master
Department: Department of Electrical Engineering
Publication Year: 2021
Academic Year of Graduation: 109 (ROC calendar)
Language: Chinese
Number of Pages: 30
Keywords (Chinese): 車牌辨識系統, 智慧交通, 物件偵測
Keywords (English): License plate recognition system, smart transportation, object detection
Research Method: Experimental design
DOI URL: http://doi.org/10.6345/NTNU202101414
Thesis Type: Academic thesis
Abstract (Chinese, translated):
In recent years, deep learning techniques have been widely applied to the development of smart transportation, and license plate recognition has become an indispensable technology within it. License plate recognition systems can be used for vehicle management in smart cities, stolen-vehicle investigation, tracking of vehicles involved in crimes, and traffic monitoring. For example, police used to track such vehicles by manually reviewing surveillance footage, which consumes a great deal of time and manpower; a license plate recognition system can quickly search for target vehicles across large volumes of surveillance footage, easing the manpower burden, saving substantial tracking time, and improving the efficiency of solving cases.
Modern license plate recognition technology is already mature enough for settings such as smart parking lots and electronic toll collection, but applying it to intersection surveillance footage still faces many problems, including camera angles, lighting conditions, blur caused by vehicle motion, complex road environments, and an abundance of traffic signs and advertising billboards. A license plate recognition system can be divided into two stages: the first stage locates the license plate in the image, and the second recognizes the plate image found in the first stage. This thesis addresses only the second stage, license plate character recognition, which has two main goals: locating the characters on the plate and classifying each character. Traditional license plate recognition must first segment out the character positions before the characters can be recognized, so we design a lightweight license plate recognition model based on the concept of object detection. Framing the problem as object detection merges character segmentation and character recognition into a single task, so one network model alone can locate the characters and classify them. The dataset used in this thesis is a Taiwanese license plate dataset we built ourselves; every image was taken by us on streets in Taiwan rather than collected from the Internet. We also deliberately selected blurry, reflective, and dimly lit plate photos to enrich the dataset. The full dataset contains 3,753 photos, with 3,131 for training and 622 for testing, and the test photos lean toward blurry and dim conditions. The final experimental results show that our model requires only 4.91 GFLOPs while reaching an mAP@0.5 of 89.62.
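To make the detection-as-recognition idea above concrete, the following minimal Python sketch shows how per-character detections from a single network (bounding box, class, confidence) could be assembled into a plate string. The record does not include the thesis code, so the class list, data structure, and confidence threshold below are illustrative assumptions rather than the authors' implementation.

from dataclasses import dataclass

# Illustrative class set: digits 0-9 and letters A-Z (the thesis may use a
# different set, e.g. excluding letters not used on Taiwanese plates).
CLASS_NAMES = list("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ")

@dataclass
class Detection:
    x_center: float    # horizontal center of the character box, in pixels
    y_center: float
    width: float
    height: float
    class_id: int      # index into CLASS_NAMES
    confidence: float

def detections_to_plate(detections, min_conf=0.4):
    """Keep confident character boxes and read them left to right."""
    kept = [d for d in detections if d.confidence >= min_conf]
    kept.sort(key=lambda d: d.x_center)  # plate characters read left to right
    return "".join(CLASS_NAMES[d.class_id] for d in kept)

# Example: three detections that spell "ABC" regardless of output order.
dets = [
    Detection(120, 40, 18, 30, class_id=11, confidence=0.93),  # 'B'
    Detection(100, 40, 18, 30, class_id=10, confidence=0.95),  # 'A'
    Detection(140, 40, 18, 30, class_id=12, confidence=0.91),  # 'C'
]
print(detections_to_plate(dets))  # -> "ABC"

Because each detection already carries both a position and a character class, this sort-and-join step is all that remains of the traditional segment-then-classify pipeline.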

Abstract (English):
In recent years, deep learning technology has been widely used in the development of smart transportation, and license plate recognition has become an indispensable part of it. License plate recognition systems can be applied to vehicle management in smart cities, stolen-vehicle investigation, tracking of vehicles involved in crimes, and traffic monitoring. For example, in the past the police tracked such vehicles by manually reviewing surveillance footage, which required a great deal of time and manpower. A license plate recognition system can quickly search for target vehicles in large amounts of surveillance footage, reducing the manpower burden, saving tracking time, and improving the efficiency of solving cases.
However, using license plate recognition on intersection surveillance footage still faces many problems, including camera angles, lighting conditions, blur caused by vehicle motion, complex road environments, and excessive traffic signs and advertising billboards. Traditional license plate recognition must segment out the character positions before subsequent character recognition. Therefore, we use the concept of object detection to design a lightweight license plate recognition model that integrates character segmentation and character recognition, so only one network is needed to locate the characters and recognize their classes.
This thesis also presents a Taiwanese license plate dataset collected by ourselves; all images were taken by us on the streets of Taiwan rather than gathered from the Internet. We also deliberately selected blurry, reflective, and dimly lit license plate photos to enrich the dataset. The entire dataset contains 3,753 photos, including 3,131 for training and 622 for testing, and the test photos lean toward blurry and dim conditions. The experimental results show that our model requires only 4.91 GFLOPs while reaching an mAP@0.5 of 89.62.
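As a side note on the reported metric, the sketch below (with made-up box coordinates, not taken from the thesis) shows the intersection-over-union test that underlies mAP@0.5: a predicted character box counts as a correct detection only when its IoU with the ground-truth box reaches 0.5.

def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

pred  = (100, 40, 118, 70)   # hypothetical predicted character box
truth = (102, 42, 120, 72)   # hypothetical annotated ground-truth box
print(iou(pred, truth) >= 0.5)  # True -> counted as correct at the 0.5 threshold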

Table of Contents
Chapter 1  Introduction
  1.1 Research Background and Motivation
    1. Lighting variations
    2. Blurred license plates
    3. Complex road environments
    4. Taiwanese license plate rules
    5. Others
  1.2 Research Objectives
  1.3 Overview of the Research Method
  1.4 Research Contributions
  1.5 Thesis Organization
Chapter 2  Literature Review
  2.1 Traditional License Plate Character Recognition Systems
    2.1.1 Character Segmentation
    2.1.2 Character Recognition
  2.2 License Plate Recognition Systems Based on Object Detection
    2.2.1 Region-based CNN (R-CNN)
    2.2.2 Fast Region-based CNN (Fast R-CNN)
    2.2.3 Faster Region-based CNN (Faster R-CNN)
    2.2.4 You Only Look Once (YOLO)
Chapter 3  Research Method
  3.1 License Plate Character Recognition Architecture
    3.1.1 CNN Architecture
    3.1.2 Detection Layer
    3.1.3 Loss Function
    3.1.4 Data Augmentation
  3.2 Taiwanese License Plate Dataset
Chapter 4  Experimental Results
  4.1 Experiments on Varying the Convolution Kernel
  4.2 Data Augmentation Experiments
  4.3 Accuracy Comparison
  4.4 Visualized Test Results
Chapter 5  Conclusion and Future Work
  5.1 Conclusion
  5.2 Future Work
References
Autobiography

