Graduate Student: 石展兢 Shih, Chan-Ching
Thesis Title: 視覺式智慧型高爾夫揮桿動作姿勢分析系統 A Vision-Based Intelligent Golf Swing Posture Analysis System
Advisor: 方瓊瑤 Fang, Chiung-Yao
Oral Examination Committee: 方瓊瑤 Fang, Chiung-Yao; 陳世旺 Chen, Sei-Wang; 黃仲誼 Huang, Chung-I; 羅安鈞 Luo, An-Chun; 許之凡 Hsu, Chih-Fan
Oral Examination Date: 2022/06/30
Degree: 碩士 Master
Department: 資訊工程學系 Department of Computer Science and Information Engineering
Year of Publication: 2022
Academic Year of Graduation: 110
Language: Chinese
Number of Pages: 74
Chinese Keywords: 高爾夫運動, 高爾夫揮桿姿勢, 運動科技, 輕量級神經網路, 循環神經網路, 三維人體模型, 深度學習
English Keywords: Golf, Golf swing, Sports technology, Lightweight neural network, Recurrent neural network, 3D human pose and shape, Deep learning
Research Methods: Experimental design, Participant observation, Comparative research, Content analysis
DOI URL: http://doi.org/10.6345/NTNU202201007
Thesis Type: Academic thesis
Record Statistics: Views: 145, Downloads: 35
Global participation in golf is rising steadily. According to The R&A, the governing body of world golf, 66.6 million people played golf worldwide in 2021, surpassing the 61.6 million recorded in 2012 and reaching an all-time high; golf has clearly become a popular sport around the world. In recent years, sports technology has emerged, combining sport with technology, and intelligent training can effectively help athletes improve training quality and reduce sports injuries. Taking golf as its subject, and in order to avoid sports injuries caused by incorrect swing postures, this study develops a vision-based intelligent golf swing posture analysis system that lets users compare their own golf swing posture with a coach's anytime and anywhere, so that they can correct their swing posture by themselves.
The vision-based intelligent golf swing posture analysis system takes a user's golf swing video and a coach's golf swing video as input and compares and analyzes the two swing postures. The system consists of two main steps: extraction of the decomposed actions of the golf swing, and posture comparison and analysis with 3D human models. In the first step, this study modifies the lightweight network ShuffleNetV2 and the recurrent neural network Bi-GRU to extract the eight decomposed actions of both the user's and the coach's golf swings. In the second step, the extracted decomposed actions are used to construct 3D human models that carry rich body information, and these models are then used to compare and analyze the swing postures of the user and the coach.
This study decomposes the golf swing into eight actions, in order: address, toe-up, mid-backswing, top, mid-downswing, impact, mid-follow-through, and finish. Golf swing videos collected in the GolfDB dataset [Mcn19] are used for training and testing, and the experimental results show that the accuracy of decomposed-action extraction is 86.15%. In addition, the 3D human model adopted in this study is a body mesh composed of 6,890 vertices that divides the human body into 24 parts; its realistic representation of the body allows the differences between the user's and the coach's swing postures to be judged more precisely. These results show that the proposed vision-based intelligent golf swing posture analysis system is effective.
Global participation in golf is increasing steadily. The R&A, the governing body of world golf, announced that 66.6 million people played golf worldwide in 2021, exceeding the 61.6 million of 2012 and reaching an all-time high. In recent years, sports technology has been on the rise, and the use of intelligent techniques can effectively help athletes improve training quality and reduce sports injuries. This study proposes a vision-based intelligent golf swing posture analysis system that allows users to compare their own golf swing with a coach's anytime and anywhere. The proposed system helps users correct their golf swing by themselves and avoid sports injuries caused by incorrect swing postures.
The inputs of the proposed vision-based intelligent golf swing posture analysis system are one user's golf swing video and one coach's golf swing video. The system consists of two main stages: golf swing decomposed-action extraction and golf swing posture comparison and analysis. In the first stage, this study modifies a lightweight neural network (ShuffleNetV2) and a recurrent neural network (bidirectional GRU) to extract the eight decomposed actions of the golf swings of both the user and the coach. In the second stage, the system uses the eight decomposed actions of both swings to construct 3D human models, which are then used to compare and analyze the golf swings of the user and the coach.
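As a rough illustration of the first stage, the sketch below shows how a ShuffleNetV2 backbone can feed per-frame features into a bidirectional GRU that labels every frame with one of the eight swing events or a background class. It is a minimal PyTorch example written from the description in this abstract, not the thesis implementation; the hidden size, the nine-class output, and the 160x160 input resolution are assumptions.

```python
# Minimal sketch of a ShuffleNetV2 + Bi-GRU swing-event classifier.
# Not the thesis implementation: the 9 output classes (8 swing events +
# background), the hidden size, and the input resolution are assumptions.
import torch
import torch.nn as nn
import torchvision

class SwingEventNet(nn.Module):
    def __init__(self, num_classes=9, hidden=128):
        super().__init__()
        backbone = torchvision.models.shufflenet_v2_x1_0()  # pretrained weights could be loaded here
        feat_dim = backbone.fc.in_features        # 1024 for shufflenet_v2_x1_0
        backbone.fc = nn.Identity()               # keep the pooled per-frame features
        self.backbone = backbone
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, clips):                     # clips: (batch, frames, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1))   # (batch*frames, feat_dim)
        seq, _ = self.rnn(feats.view(b, t, -1))      # (batch, frames, 2*hidden)
        return self.head(seq)                        # per-frame event logits

logits = SwingEventNet()(torch.randn(1, 16, 3, 160, 160))
print(logits.shape)  # torch.Size([1, 16, 9])
```

In this reading, the frame whose predicted probability for a given event is highest would be taken as that decomposed action, which is how per-frame event logits can be turned into the eight key frames the abstract describes.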
This study decomposes the golf swing into eight actions, in the order of address, toe-up, mid-backswing, top, mid-downswing, impact, mid-follow-through, and finish. Golf swing videos collected in the GolfDB dataset [Mcn19] are used for training and testing, and the experimental results show that the accuracy of decomposed-action extraction is 86.15%. In addition, the 3D human model used in this study is composed of 6,890 vertices and decomposes the human body into 24 body parts. In the experiments, these characteristics of the 3D model make it possible to judge the differences between the user's and the coach's swing postures more accurately. In conclusion, the vision-based intelligent golf swing posture analysis system proposed in this study is effective and robust.
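To make the 3D-model comparison concrete, the following sketch compares two SMPL-style pose vectors (24 joints, each stored as a 3-element axis-angle rotation, 72 numbers in total) by measuring the geodesic angle between the user's and the coach's rotation at every joint. It is a simplified illustration under those assumptions, not the comparison procedure used in the thesis; the random stand-in poses and the 15-degree flagging threshold are hypothetical.

```python
# Sketch: comparing the user's and the coach's posture at one swing event
# using SMPL-style pose parameters (24 joints x 3 axis-angle values = 72).
# The stand-in poses and the 15-degree threshold are illustrative only.
import numpy as np
from scipy.spatial.transform import Rotation as R

def joint_angle_differences(user_pose, coach_pose):
    """Per-joint geodesic angle (radians) between two 72-d SMPL-style poses."""
    user_rot = R.from_rotvec(np.asarray(user_pose).reshape(24, 3))
    coach_rot = R.from_rotvec(np.asarray(coach_pose).reshape(24, 3))
    relative = coach_rot.inv() * user_rot   # rotation taking the coach's joint to the user's
    return relative.magnitude()             # angle of each relative rotation

rng = np.random.default_rng(0)
user = rng.uniform(-0.3, 0.3, 72)           # stand-ins for poses estimated from the videos
coach = rng.uniform(-0.3, 0.3, 72)
diff = joint_angle_differences(user, coach)
flagged = np.where(diff > np.deg2rad(15))[0]  # joints differing by more than 15 degrees
print("joints the user may need to adjust:", flagged)
```

A per-joint angular difference of this kind maps naturally onto the 24 body parts of the mesh, so large differences can be reported to the user as the body regions whose swing posture deviates most from the coach's.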
[Mal95] W. J. Mallon and A. J. Colosimo, “Acromioclavicular Joint Injury in Competitive Golfers,” Journal of the Southern Orthopaedic Association, pp. 227-82, 1995.
[The98] G. Thériault and P. Lachance, “Golf Injuries. An overview,” Sports Med, pp. 43-57, 1998.
[Mch07] A. McHardy, H. Pollard, and K. Lou, “The Epidemiology of Golf-related Injuries in Australian Amateur Golfers - A Multivariate Analysis,” South African Journal of Sports Medicine, pp. 12-19, 2007.
[Cho12] P. Chotimanus, N. Cooharojananone, and S. Phimoltares, “Real Swing Extraction for Video Indexing in Golf Practice Video,” Proceedings of Computing, Communications and Applications Conference, Hong Kong, China, pp. 420-425, 2012.
[Noi13] S. Noiumkar and S. Tirakoat, “Use of Optical Motion Capture in Sports Science: A Case Study of Golf Swing,” 2013 International Conference on Informatics and Creative Multimedia, Hong Kong, China, pp. 310-313, 2013.
[Mcn19] W. McNally, K. Vats, T. Pinto, C. Dulhanty, J. McPhee, and A. Wong, “GolfDB: A Video Database for Golf Swing Sequencing,” Proceedings of 2019 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, pp. 2553-2562, 2019.
[How17] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications,” arXiv preprint arXiv: 1704.04861, 2017.
[How19] A. Howard, M. Sandler, G. Chu, L. C. Chen, B. Chen, M. Tan, W. Wang, Y. Zhu, R. Pang, V. Vasudevan, Q. V. Le, and H. Adam, “Searching for MobileNetV3,” Proceedings of 2019 IEEE International Conference on Computer Vision (ICCV), Seoul, Korea, pp. 1314-1324, 2019.
[Hu18] J. Hu, L. Shen, and G. Sun, “Squeeze-and-Excitation Networks,” Proceedings of 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, pp. 7132-7141, 2018.
[Zha18] X. Zhang, X. Zhou, M. Lin, and J. Sun, “ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices,” Proceedings of 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, pp. 6848-6856, 2018.
[Ma18] N. Ma, X. Zhang, H. T. Zheng, and J. Sun, “ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design,” Proceedings of the European Conference on Computer Vision (ECCV), pp. 116-131, 2018.
[San18] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L. -C. Chen, “MobileNetV2: Inverted Residuals and Linear Bottlenecks,” Proceedings of 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, pp. 4510-4520, 2018.
[Ko21] K. R. Ko and S. B. Pan, “CNN and bi-LSTM based 3D Golf Swing Analysis by Frontal Swing Sequence Images,” Multimedia Tools and Applications, pp. 8957-8972, 2021.
[Lop15] M. Loper, N. Mahmood, J. Romero, G. Pons-Moll, and M. J. Black, “SMPL: A Skinned Multi-Person Linear Model,” ACM Transactions on Graphics, pp. 248:1-248:16, 2015.
[Kan18] A. Kanazawa, M. J. Black, D. W. Jacobs, and J. Malik, “End-to-end Recovery of Human Shape and Pose,” Proceedings of 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, pp. 7122-7131, 2018.
[Koc20] M. Kocabas, N. Athanasiou, and M. J. Black, “VIBE: Video Inference for Human Body Pose and Shape Estimation,” Proceedings of 2020 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, pp. 5252-5262, 2020.
[Mah19] N. Mahmood, N. Ghorbani, N. F. Troje, G. Pons-Moll, and M. Black, “AMASS: Archive of Motion Capture as Surface Shapes,” Proceedings of 2019 IEEE International Conference on Computer Vision (ICCV), Seoul, Korea, pp. 5441-5450, 2019.
[Cho21] H. Choi, G. Moon, J. Y. Chang, and K. M. Lee, “Beyond Static Features for Temporally Consistent 3D Human Pose and Shape from a Video,” Proceedings of 2021 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, pp. 1964-1973, 2021.
[Zha21] H. Zhang, Y. Tian, X. Zhou, W. Ouyang, Y. Liu, L. Wang, and Z. Sun, “PyMAF: 3D Human Pose and Shape Regression with Pyramidal Mesh Alignment Feedback Loop,” Proceedings of 2021 IEEE International Conference on Computer Vision (ICCV), Quebec, Canada, pp. 11446-11456, 2021.
[Bog16] F. Bogo, A. Kanazawa, C. Lassner, P. Gehler, J. Romero, and M. J. Black, “Keep It SMPL: Automatic Estimation of 3D Human Pose and Shape from a Single Image,” Proceedings of 2016 European Conference on Computer Vision (ECCV), pp. 561-578, 2016.
[Pis16] L. Pishchulin, E. Insafutdinov, S. Tang, B. Andres, M. Andriluka, P. Gehler, and B. Schiele, “DeepCut: Joint Subset Partition and Labeling for Multi Person Pose Estimation,” Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, pp. 4929-4937, 2016.
[Omr18] M. Omran, C. Lassner, G. Pons-Moll, P. V. Gehler, and B. Schiele, “Neural Body Fitting: Unifying Deep Learning and Model-Based Human Pose and Shape Estimation,” Proceedings of 2018 International Conference on 3D Vision (3DV), pp. 484-494, 2018.
[Kol21] N. Kolotouros, G. Pavlakos, D. Jayaraman, and K. Daniilidis, “Probabilistic Modeling for Human Mesh Recovery,” Proceedings of 2021 IEEE International Conference on Computer Vision (ICCV), Montreal, QC, pp. 11605-11614, 2021.
[Fie21] M. Fieraru, M. Zanfir, S. C. Pirlea, V. Olaru, and C. Sminchisescu, “AIFit: Automatic 3D Human-Interpretable Feedback Models for Fitness Training,” Proceedings of 2021 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, pp. 9914-9923, 2021.
[Xie19] H. Xie, A. Watatani, and K. Miyata, “Visual Feedback for Core Training with 3D Human Shape and Pose,” 2019 Nicograph International (NicoInt), pp. 49-56, 2019.
[Cao17] Z. Cao, T. Simon, S. Wei, and Y. Sheikh, “Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields,” Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, pp. 7291-7299, 2017.
[Pre16] L. L. Presti and M. L. Cascia, “3D Skeleton-Based Human Action Classification: A Survey,” Pattern Recognition, pp. 130–147, 2016.
[Rez15] D. J. Rezende and S. Mohamed, “Variational Inference with Normalizing Flows,” Proceedings of the 32nd International Conference on Machine Learning (ICML), pp. 1530-1538, 2015.
[Sin14] N. Singla, “Motion Detection Based on Frame Difference Method,” International Journal of Information & Computation Technology, pp. 1559-1565, 2014.
[Iof15] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift,” Proceedings of the 32nd International Conference on Machine Learning (ICML), pp. 448-456, 2015.
[Wu18] Y. Wu and K. He, “Group Normalization,” Proceedings of the European Conference on Computer Vision (ECCV), pp. 3-19, 2018.
[Ram17] P. Ramachandran, B. Zoph, and Q. V. Le, “Searching for Activation Functions,” arXiv preprint arXiv: 1710.05941, 2017.
[Wan20] Q. Wang, B. Wu, P. Zhu, P. Li, W. Zuo, and Q. Hu, “ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks,” Proceedings of 2020 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, pp. 11531-11539, 2020.
[1] Top 10 Most Popular Sports In The World: https://sportsshow.net/top-10-most-popular-sports-in-the- , 2021.
[2] International Golf Federation National Members: https://www.igfgolf.org/about-igf/nationalmembers/.
[3] The Royal & Ancient Golf Club of St Andrews, Record Numbers Now Playing Golf Worldwide: https://api.randa.org/en/news/2021/12/record-numbers-now-playing-golf-worldwide.
[4] Tech battles are now the norm: the "super teammates" behind athletes (科技戰已成常態，運動員背後的「神隊友們」): https://www.inside.com.tw/feature/digi-plus/25004-digi-plus-science-sport, 2021.
[5] Sports technology adoption: smart golf grows the playing population (運動科技導入，智能高爾夫增加擊球人口): http://www.taiwangca.org.tw/news/data.php?id=1145, 2021.
[6] Son Ye-jin, Hyun Bin, and Jun Ji-hyun all love golf: a light swing slims the whole body (孫藝珍、玄彬、全智賢都愛打「高爾夫球」，輕輕一揮竿全身瘦): https://www.womenshealthmag.com/tw/fitness/work-outs/g35690208/golf- , 2021.
[7] How important is swing posture? Founder of One-Three Golf: "One year of mistakes takes three years to correct!" (揮桿姿勢多重要？一三高爾夫創辦人：「1年錯誤要花3年修正！」): https://news.ebc.net.tw/news/living/159443, 2019.
[8] Swing Fundamentals & Skills (揮桿技術及原理): http://www.garoc.org/images/referee_coach/201910181834563702.pdf.
[9] Reading: ShuffleNet V2 — Practical Guidelines for Efficient CNN Architecture Design: https://sh-tsang.medium.com/reading-shufflenet-v2-practical-guidelines-for-efficient-cnn-architecture-design-image-287b05abc08a.
[10] Who is the king of lightweight CNNs? A comprehensive evaluation of mobilenet/shufflenet/ghostnet across 7 dimensions (誰才是輕量級CNN的王者？7個維度全面評測 mobilenet/shufflenet/ghostnet): https://www.bilibili.com/read/cv8801259.
[11] Humans Process Visual Data Better: https://www.t-sciences.com/news/humans-process-visual-data-better.