| Field | Value |
|---|---|
| Author | 王千瑞 (Wang, Chien-Jui) |
| Title | 基於深度學習之職安監測系統開發 (Development of an Occupational Safety Monitoring System Based on Deep Learning) |
| Advisor | 吳順德 (Wu, Shuen-De) |
| Committee | 呂有勝 (Lu, Yu-Sheng); 劉益宏 (Liu, Yi-Hung); 吳順德 (Wu, Shuen-De) |
| Defense date | 2023/07/13 |
| Degree | Master |
| Department | 機電工程學系 (Department of Mechatronic Engineering) |
| Publication year | 2023 |
| Graduation academic year | 111 (2022-23) |
| Language | Chinese |
| Pages | 61 |
| Chinese keywords | Occupational safety monitoring (職安監測), Deep learning (深度學習), Artificial intelligence (人工智慧) |
| English keywords | Occupational safety monitoring, Deep learning, Artificial intelligence, YOLOv7, LINE Notify |
| Research methods | Quasi-experimental design; Comparative study |
| DOI | http://doi.org/10.6345/NTNU202300854 |
| Document type | Academic thesis |
| Usage | Views: 105; Downloads: 16 |
In Taiwan, construction accidents and occupational injuries rank among the highest each year, significantly affecting both workers' lives and industrial productivity, and unsafe behavior by workers is the primary cause of these accidents. The traditional countermeasure is to install surveillance cameras or assign personnel to supervise construction sites, but owing to manpower constraints, the effectiveness and efficiency of such supervision are limited. This study therefore develops a deep-learning-based occupational safety monitoring system to assist occupational safety management at construction sites.
Technological advances have greatly improved the capability and speed of image recognition. This study employs YOLOv7, a recent object detector optimized in both model architecture and training procedure. The detector was trained on construction-site images to build an occupational safety state recognition model, which was then applied to construction footage to screen out events that violate occupational safety regulations; the recognition results were reported in real time through LINE Notify. Beyond the algorithmic improvements of YOLOv7 over the YOLOv5 baseline, this study further improved the recognition capability of the monitoring system by revising and augmenting the training dataset and retraining the model, raising the model's mAP (mean Average Precision) by approximately 4%.
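The detect-screen-notify loop described above can be sketched as follows. This is a minimal illustration, not the thesis code: `run_detector()` and the violation class names are hypothetical stand-ins for YOLOv7 inference (the thesis builds on the WongKinYiu/yolov7 codebase), while the HTTP call targets LINE Notify's documented REST endpoint.

```python
import requests

LINE_NOTIFY_URL = "https://notify-api.line.me/api/notify"
LINE_TOKEN = "YOUR_ACCESS_TOKEN"               # personal token issued by LINE Notify

VIOLATION_CLASSES = {"no_helmet", "no_vest"}   # hypothetical class names
CONF_THRESHOLD = 0.5                           # assumed confidence cut-off

def screen_detections(detections):
    """Keep only detections that indicate an occupational safety violation."""
    return [d for d in detections
            if d["class"] in VIOLATION_CLASSES and d["conf"] >= CONF_THRESHOLD]

def notify_violation(message, image_path=None):
    """Push a text alert (optionally with a snapshot) through LINE Notify."""
    headers = {"Authorization": f"Bearer {LINE_TOKEN}"}
    if image_path:
        with open(image_path, "rb") as img:
            resp = requests.post(LINE_NOTIFY_URL, headers=headers,
                                 data={"message": message},
                                 files={"imageFile": img})
    else:
        resp = requests.post(LINE_NOTIFY_URL, headers=headers,
                             data={"message": message})
    resp.raise_for_status()

# Example frame loop; run_detector() stands in for YOLOv7 inference that
# returns [{"class": ..., "conf": ..., "box": ...}, ...] for one frame.
# for frame_path in camera_frames:
#     violations = screen_detections(run_detector(frame_path))
#     if violations:
#         notify_violation(f"{len(violations)} safety violation(s) detected",
#                          image_path=frame_path)
```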
The recognition model developed in this study reached a best mAP@0.5 of 0.98 during training. A high mAP@0.5 indicates fewer false positives and false negatives: an excessive false positive rate disrupts on-site work and erodes confidence in the alerts, while an excessive false negative rate means safety violations go undetected, undermining the real-time warning function. The resulting gains improve safety management at construction sites, reduce accidents, and strengthen the feasibility and value of this work in industrial practice.
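To make the mAP@0.5 metric concrete: a detection counts as a true positive when its IoU with a previously unmatched ground-truth box reaches 0.5, AP is the area under the resulting precision-recall curve for one class, and mAP averages AP over classes. Below is a minimal, self-contained sketch for one class; the box format and inputs are illustrative, not thesis data.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def average_precision(preds, gts, iou_thr=0.5):
    """preds: list of (box, confidence); gts: list of ground-truth boxes.
    Returns AP as the area under the precision-recall curve."""
    preds = sorted(preds, key=lambda p: -p[1])   # highest confidence first
    matched, tps = set(), []
    for box, _conf in preds:
        # Greedily match against the best remaining ground-truth box.
        best_j, best_iou = -1, 0.0
        for j, gt in enumerate(gts):
            if j in matched:
                continue
            v = iou(box, gt)
            if v > best_iou:
                best_j, best_iou = j, v
        if best_iou >= iou_thr:
            matched.add(best_j)
            tps.append(1.0)   # true positive
        else:
            tps.append(0.0)   # false positive
    tps = np.array(tps)
    cum_tp = np.cumsum(tps)
    precision = cum_tp / np.arange(1, len(tps) + 1)
    recall = cum_tp / max(len(gts), 1)   # unmatched gts stay false negatives
    # Step-integrate precision over recall.
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precision, recall):
        ap += p * (r - prev_r)
        prev_r = r
    return ap
```

In this formulation, low-confidence spurious detections (false positives) pull precision down, while missed ground-truth boxes (false negatives) cap recall, which is why a high mAP@0.5 corresponds to few of either.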