
Graduate Student: Wu, Yu-Xuan (吳鈺瑄)
Thesis Title: A Movement Tracking System of Basketball Player (籃球球員運動追蹤系統)
Advisor: Fang, Chiung-Yao (方瓊瑤)
Committee Members: Fang, Chiung-Yao (方瓊瑤); Chen, Sei-Wang (陳世旺); Huang, Chung-I (黃仲誼); Luo, An-Chun (羅安鈞); Wu, Meng-Luen (吳孟倫)
Oral Defense Date: 2024/07/12
Degree: Master
Department: Department of Computer Science and Information Engineering
Publication Year: 2024
Graduation Academic Year: 112 (ROC calendar)
Language: Chinese
Number of Pages: 45
Keywords: basketball, movement tracking, player detection, court detection, team assignment, background removal
Research Method: Experimental design
DOI URL: http://doi.org/10.6345/NTNU202401846
Thesis Type: Academic thesis
Access Statistics: Views: 85; Downloads: 3
    Basketball enjoys a vast audience and broad participation worldwide. With technological advances, data analysis techniques are increasingly applied to basketball games; these innovations provide tactical support for coaches and players and enhance game performance. To further promote the development of basketball, this research develops a basketball player movement tracking system that provides data and analysis to help coaches and players formulate more effective tactical strategies. The system takes basketball game videos as input and, through a series of processing steps, displays the players' movement trajectories on a top-view diagram of the basketball court. The main processing steps are basketball player detection, court detection and coordinate transformation, and player movement tracking. The system uses the YOLOv8 [Gle23] module for player detection, the KaliCalib [Mag22] technique for court detection, and algorithms proposed in this research for team assignment and tracking. The system crops the background region of the input images to obtain the background color, removes the background from the player images, and applies k-means clustering for team assignment while also eliminating the influence of referees. These techniques keep team assignment accurate under various lighting conditions, game venues, and player uniform color contrasts. This study proposes three metrics to evaluate system accuracy: the average displacement of player trajectories, the accuracy of player movement direction, and the accuracy of team assignment. Experimental results show that the system's average displacement is 2.79 meters, the accuracy of player movement direction is 70%, and the accuracy of team assignment is 91%.
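
    The court detection and coordinate transformation step maps detected player positions from the broadcast frame onto the court diagram. The sketch below illustrates the general idea using OpenCV's homography utilities; the point correspondences, the 28 x 15 m FIBA court dimensions, and the function names are illustrative assumptions, not the thesis's actual KaliCalib-based implementation.

        import numpy as np
        import cv2

        # Four illustrative frame-to-court correspondences (pixels -> metres).
        # A FIBA full court is 28 m x 15 m; the pixel coordinates are placeholders.
        image_pts = np.array([[210, 480], [1050, 470], [900, 260], [330, 265]], dtype=np.float32)
        court_pts = np.array([[0, 0], [28, 0], [28, 15], [0, 15]], dtype=np.float32)

        # Estimate the 3x3 homography from the image plane to the court plane.
        H, _ = cv2.findHomography(image_pts, court_pts)

        def to_court_coords(foot_xy, H):
            """Project an image point (e.g. the bottom-centre of a player's
            bounding box) onto the court plane, returning (x, y) in metres."""
            pt = np.array([[foot_xy]], dtype=np.float32)   # shape (1, 1, 2)
            return cv2.perspectiveTransform(pt, H)[0, 0]

        # Example: approximate court position of a player detected at pixel (640, 400).
        print(to_court_coords((640, 400), H))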
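
    The team-assignment step removes court-coloured background pixels from each player crop and clusters the remaining jersey colours with k-means. The following is a minimal sketch of that idea; the colour-distance threshold, the mean-foreground-colour feature, and the cluster count are illustrative assumptions rather than parameters reported in the thesis.

        import numpy as np
        from sklearn.cluster import KMeans

        def dominant_color(crop_bgr, background_bgr, bg_threshold=40.0):
            """Mean colour of a player crop after discarding pixels close to the
            sampled court colour (Euclidean distance in BGR). The threshold of 40
            is an illustrative value, not one taken from the thesis."""
            pixels = crop_bgr.reshape(-1, 3).astype(np.float32)
            dist = np.linalg.norm(pixels - np.asarray(background_bgr, dtype=np.float32), axis=1)
            foreground = pixels[dist > bg_threshold]
            return foreground.mean(axis=0) if len(foreground) else pixels.mean(axis=0)

        def assign_teams(player_crops, background_bgr, n_clusters=2):
            """Cluster per-player dominant colours with k-means; with n_clusters=3
            the extra cluster can absorb referees, mirroring the idea in the abstract."""
            features = np.stack([dominant_color(c, background_bgr) for c in player_crops])
            return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)

        # Usage: player_crops are HxWx3 BGR crops from the YOLOv8 detections, and
        # background_bgr is a colour sampled from a cropped court region of the frame.
        # team_labels = assign_teams(player_crops, background_bgr=(180, 160, 140))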

    Abstract (Chinese) i
    Abstract ii
    Acknowledgments iii
    List of Figures v
    List of Tables vii
    Chapter 1 Introduction 1
        Section 1 Research Motivation and Objectives 1
        Section 2 Research Challenges 2
        Section 3 Research Contributions 3
        Section 4 Thesis Organization 3
    Chapter 2 Literature Review 4
        Section 1 Basketball Player Detection 4
        Section 2 Court Detection and Coordinate Transformation 9
        Section 3 Basketball Player Movement Trajectory Tracking 12
    Chapter 3 Basketball Player Movement Tracking System 15
        Section 1 System Pipeline 15
    Chapter 4 Experimental Results and Discussion 26
        Section 1 System Accuracy 26
        Section 2 Discussion of Tracking Results 29
    Chapter 5 Conclusion and Future Work 40
        Section 1 Conclusion 40
        Section 2 Future Work 40
    References 42

    [Cha22] Y. S. Chao, W. C. Chen, J. W. Peng, and M. C. Hu, “Learning Robust Latent Space of Basketball Player Trajectories for Tactics Analysis,” Proceedings of 2022 IEEE International Conference on Multimedia and Expo (ICME), Taipei, Taiwan, 2022, pp. 1-6.
    [Che15] C. H. Chen, T. L. Liu, Y. S. Wang, H. K. Chu, N. C. Tang, and H. Y. M. Liao, “Spatio-Temporal Learning of Basketball Offensive Strategies,” Proceedings of the 23rd ACM International Conference on Multimedia (MM '15), October 2015, pp. 1123-1126.
    [Gle23] “Ultralytics YOLOv8,” 2023, [Online]. Available: https://github.com/ultralytics/ultralytics?tab=readme-ov-file
    [Mag22] A. Maglo, A. Orcesi, and Q. C. Pham, “KaliCalib: A Framework for Basketball Court Registration,” Proceedings of the 5th International ACM Workshop on Multimedia Content Analysis in Sports (MMSports '22), Lisbon, Portugal, pp. 1-6, October 2022.
    [Far04] D. Farin, S. Krabbe, P. H. N. de With, and W. Effelsberg, “Robust Camera Calibration for Sport Videos Using Court Models,” Storage and Retrieval Methods and Applications for Multimedia, vol. 5307, 2004, pp. 80-91.
    [Hu11] M. C. Hu, M. H. Chang, J. L. Wu, and L. Chi, “Robust Camera Calibration and Player Tracking in Broadcast Basketball Video,” IEEE Transactions on Multimedia, vol. 13, no. 2, 2011, pp. 266-279.
    [Far05] D. Farin, J. Han, and P. de With, “Fast Camera Calibration for the Analysis of Sport Sequences,” Proceedings of the IEEE International Conference on Multimedia and Expo, Amsterdam, The Netherlands, July 2005, pp. 482-485.
    [Wen16] P. C. Wen, W. C. Cheng, Y. S. Wang, H. K. Chu, N. C. Tang, and H. Y. M. Liao, “Court Reconstruction for Camera Calibration in Broadcast Basketball Videos,” IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 5, May 2016, pp. 1517-1526.
    [Che20] L. Chen and W. Wang, “Analysis of Technical Features in Basketball Video Based on Deep Learning Algorithm,” Signal Processing: Image Communication, vol. 83, Apr. 2020, pp. 115-124.
    [Tsa21] T. Y. Tsai, Y. Y. Lin, S. K. Jeng, and H. Y. M. Liao, “End-to-End Key-Player-Based Group Activity Recognition Network Applied to Basketball Offensive Tactic Identification in Limited Data Scenarios,” IEEE Access, vol. 9, July 2021, pp. 104395-104404.
    [Mil17] A. C. Miller and L. Bornn, “Possession Sketches: Mapping NBA Strategies,” Proceedings of the MIT Sloan Sports Analytics Conference, Hynes Convention Center, Boston, MA, March 2017.
    [Wor23] World Basketball Day, 2023, [Online]. Available: https://www.un.org/zh/observances/world-basketball-day
    [Spo20] The World's Most Popular Sports: The First Is Expected, but the Second Is Surprisingly…?, 2020, [Online]. Available: https://mag.sportsoho.com/全球最受歡迎運動
    [Ter24] J. R. Terven and D. M. C. Esparza, “A Comprehensive Review of YOLO Architectures in Computer Vision: From YOLOv1 to YOLOv8 and YOLO-NAS,” Machine Learning and Knowledge Extraction, vol. 5, no. 4, February 2024, pp. 1680-1716.
    [Cai18] Z. Cai and N. Vasconcelos, “Cascade R-CNN: Delving Into High Quality Object Detection,” Proceedings of 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
    [Zhu21] X. Zhu, W. Su, L. Lu, B. Li, X. Wang, and J. Dai, “Deformable DETR: Deformable Transformers for End-to-End Object Detection,” Proceedings of 2021 International Conference on Learning Representations (ICLR), Jan 2021.
    [Car20] N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko, “End-to-End Object Detection with Transformers,” Proceedings of the 16th European Conference on Computer Vision (ECCV), August 2020.
    [Tai22] Introduction to YOLOv7, the Newest King of Object Detection, 2022, [Online]. Available: https://aiacademy.tw/yolov7/
    [Red16] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You Only Look Once: Unified, Real-Time Object Detection,” Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, pp. 779-788, 2016.
    [Red17] J. Redmon and A. Farhadi, “YOLO9000: Better, Faster, Stronger,” Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, pp. 6517-6525, 2017.
    [Red18] J. Redmon and A. Farhadi, “YOLOv3: An Incremental Improvement,” 2018, [Online]. Available: https://doi.org/10.48550/arXiv.1804.02767
    [Boc20] A. Bochkovskiy, C. Y. Wang, and H. Y. M. Liao, “YOLOv4: Optimal Speed and Accuracy of Object Detection,” 2020, [Online]. Available: https://doi.org/10.48550/arXiv.2004.10934
    [Wan15] C. Y. Wang, H. Y. M. Liao, Y. H. Wu, P. Y. Chen, J. W. Hsieh, and I. H. Yeh, “CSPNet: A New Backbone that can Enhance Learning Capability of CNN,” Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, pp. 1571-1580, 2020.
    [Liu18] S. Liu, L. Qi, H. Qin, J. Shi, and J. Jia, “Path Aggregation Network for Instance Segmentation,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), UT, USA, pp. 8759-8768, 2018.
    [Gle20] “Ultralytics YOLOv5,” 2020, [Online]. Available: https://github.com/ultralytics/yolov5
    [Li22] C. Li, L. Li, H. Jiang, K. Weng, Y. Geng, L. Li, Z. Ke, Q. Li, M. Cheng, W. Nie, Y. Li, B. Zhang, Y. Liang, L. Zhou, X. Xu, X. Chu, X. Wei, and X. Wei, “YOLOv6: A Single-Stage Object Detection Framework for Industrial Applications,” 2022, [Online]. Available: https://doi.org/10.48550/arXiv.2209.02976
    [Wan22] C. Y. Wang, A. Bochkovskiy, and H. Y. M. Liao, “YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors,” 2022, [Online]. Available: https://doi.org/10.48550/arXiv.2207.02696
    [Hon24] [Object Detection] YOLOv8 Explained in Detail, 2024, [Online]. Available: https://henry870603.medium.com/object-detection-yolov8詳解-fdf8874e5e99
    [Hey23] [YOLOv8] Differences from YOLOv5 and Improvements Explained, 2023, [Online]. Available: https://blog.csdn.net/qq_44878985/article/details/134718475
    [Blo23] A Comparative Study of YOLO V5 and YOLO V8, 2023, [Online]. Available: https://blog.csdn.net/weixin_43708069/article/details/132602991
    [Luo19] [OpenCV in Practice] 18: The Homography Matrix in OpenCV, 2019, [Online]. Available: https://www.twblogs.net/a/5cb5e690bd9eee0eff45642b
    [Liu16] Camera Model (Intrinsic and Extrinsic Parameters), 2016, [Online]. Available: https://blog.csdn.net/liulina603/article/details/52953414
    [Mor22] Camera Calibration, 2022, [Online]. Available: https://medium.com/image-processing-and-ml-note/camera-calibration-相機校正-2632f302bcbd
    [Po18] A Clear Explanation of the Pinhole Camera's Intrinsic and Extrinsic Parameter Matrices, 2018, [Online]. Available: https://blog.techbridge.cc/2018/04/22/intro-to-pinhole-camera-model/
    [Liu20] 2D Pattern Camera Calibration, 2020, [Online]. Available: https://tigercosmos.xyz/post/2020/04/cv/camera-calibration/
    [Luo24] R. Luo, Z. Song, L. Ma, J. Wei, W. Yang, and M. Yang, “DiffusionTrack: Diffusion Model for Multi-Object Tracking,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, March 2024.
    [Ran23] Brief summary of YOLOv8 model structure, 2023, [Online]. Available: https://github.com/ultralytics/ultralytics/issues/189
    [Zha22] Y. Zhang, P. Sun, Y. Jiang, D. Yu, F. Weng, Z. Yuan, P. Luo, W. Liu, and X. Wang, “ByteTrack: Multi-object Tracking by Associating Every Detection Box,” Proceedings of the 2022 European Conference on Computer Vision (ECCV), Lecture Notes in Computer Science, vol. 13682, Springer, Cham, 2022.
    [Wei22] Detailed Explanation of the ByteTrack Algorithm Steps with Line-by-Line Code Analysis, 2022, [Online]. Available: https://blog.csdn.net/weixin_43731103/article/details/123665507
    [Liu23] Z. Liu, X. Wang, C. Wang, W. Liu, and X. Bai, “SparseTrack: Multi-Object Tracking by Performing Scene Decomposition based on Pseudo-Depth,” 2023, [Online]. Available: https://doi.org/10.48550/arXiv.2306.05238
