
Graduate Student: 林厚廷 (Lin, Hou-Ting)
Thesis Title: 基於攝影機的自由重量訓練追蹤
Camera-Based Tracking for Free Weight Training
Advisor: 李忠謀 (Lee, Greg C.)
Committee Members: 李忠謀 (Lee, Greg C.); 柯佳伶 (Koh, Jia-Ling); 江政杰 (Chiang, Cheng-Chieh)
Oral Defense Date: 2024/01/25
Degree: Master
Department: Department of Computer Science and Information Engineering
Publication Year: 2024
Academic Year of Graduation: 112
Language: Chinese
Pages: 60
Keywords: Weight training, Training record, Action recognition, Automatic tracking, Human pose estimation
Research Method: Experimental design
DOI URL: http://doi.org/10.6345/NTNU202400288
Thesis Type: Academic thesis
Access Counts: 178 views; 2 downloads


In exercise, utilizing a self-monitoring mechanism to record the exercise process can help quantify exercise efficacy and provide feedback that strengthens the trainee's confidence in the effects of exercise. Weight training is a form of resistance training that requires understanding one's training objectives, planning the training content, and executing it according to individual needs. Therefore, recording five key pieces of information during training (training movements, weight, repetitions, sets, and training/rest times) can assist trainees in evaluating training quality, gauging progress, and tracking long-term training plans.
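The five recorded items map naturally onto a per-set record structure. The sketch below is a hypothetical illustration in Python; the class and field names are invented for this example and are not taken from the thesis implementation.

```python
from dataclasses import dataclass

# Hypothetical container for the five key items named in the abstract;
# field names are illustrative, not from the thesis code.
@dataclass
class SetRecord:
    movement: str         # e.g. "squat", "bench press", "deadlift"
    weight_kg: float      # load on the barbell
    repetitions: int      # reps completed in this set
    set_index: int        # position of the set within the session
    train_seconds: float  # time spent performing the set
    rest_seconds: float   # rest time after the set

session = [
    SetRecord("squat", 100.0, 5, 1, 25.0, 120.0),
    SetRecord("squat", 100.0, 5, 2, 26.5, 118.0),
]
total_reps = sum(r.repetitions for r in session)
print(total_reps)  # 10
```

A session is then simply a list of such records, from which progress summaries (total volume, rep counts, rest trends) can be derived.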
This study proposes a non-contact weight training tracking method based on computer vision. Cameras film the trainee and the training equipment, and human pose estimation combined with object detection and image segmentation extracts basic information about the trainee's movements and the equipment from the footage. An action recognition model then follows the trainee's actual free weight training pattern to automatically track the five key pieces of weight training information: movements, repetitions, sets, weight, and training/rest times.
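As a rough illustration of how repetitions can be derived from pose-estimation output, the toy sketch below counts reps from a single joint's vertical trajectory using hysteresis thresholding. The signal and thresholds are synthetic assumptions for demonstration; this is not the thesis's RepDetector.

```python
import math

def count_reps(y, low, high):
    """Count repetitions in a joint's vertical trajectory.

    A rep is counted each time the signal drops below `low` and then
    rises back above `high`; using two thresholds (hysteresis)
    suppresses jitter that a single threshold would double-count.
    """
    reps = 0
    in_bottom = False
    for v in y:
        if not in_bottom and v < low:
            in_bottom = True   # descended into the bottom of the movement
        elif in_bottom and v > high:
            in_bottom = False  # returned to the top: one complete rep
            reps += 1
    return reps

# Synthetic hip-height signal: 15 s at 30 fps, one squat-like cycle
# every 3 s, oscillating between heights 0.1 and 0.9.
y = [0.5 + 0.4 * math.cos(2 * math.pi * (i / 30.0) / 3.0) for i in range(450)]
print(count_reps(y, low=0.3, high=0.7))  # 5
```

In a real pipeline the trajectory would come from a pose estimator (e.g. a hip or wrist keypoint per frame), with smoothing and per-exercise thresholds in place of the hand-picked constants here.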
This study collected actual training footage of 17 trainees each performing three free weight training movements, filmed simultaneously from three viewpoints, for an experimental dataset of 153 videos. The tracking method was evaluated on all five recorded items. Under multi-view filming conditions that fully capture the trainee and the training equipment, the proposed method tracked the training movements and set counts of all 17 trainees from every viewpoint with an average accuracy of 100%; repetition tracking achieved an average F1-score of 0.98 across viewpoints; weight tracking reached 96% accuracy across viewpoints; and training/rest time tracking achieved an average accuracy of 100% under an 8-second error tolerance and 93% per viewpoint under tolerances of 2-6 seconds. Together, these results support that the proposed method can effectively track and record the five key aspects of weight training.
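The tolerance-based timing evaluation and the F1 metric described above can be expressed directly. The snippet below is a generic sketch with made-up numbers, not the thesis's data or exact evaluation code.

```python
def accuracy_within_tolerance(pred, truth, tol):
    """Fraction of predicted times within `tol` seconds of ground truth."""
    hits = sum(1 for p, g in zip(pred, truth) if abs(p - g) <= tol)
    return hits / len(truth)

def f1_score(tp, fp, fn):
    """F1 from true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

# Made-up predicted vs. annotated train/rest durations in seconds.
pred = [24.0, 31.5, 118.0]
truth = [25.0, 28.0, 120.0]
print(accuracy_within_tolerance(pred, truth, tol=8))  # 1.0 (all within 8 s)
print(round(accuracy_within_tolerance(pred, truth, tol=2), 3))  # 0.667

# Example repetition-counting outcome: 49 true positives, 1 FP, 1 FN.
print(round(f1_score(49, 1, 1), 2))  # 0.98
```

Loosening the tolerance can only keep or raise accuracy, which matches the reported gap between the 8-second and 2-6-second results.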

Chapter 1 Introduction 1
1.1 Research Motivation 1
1.2 Research Objectives 2
1.3 Operational Definitions of Terms 2
1.4 Thesis Organization 3
Chapter 2 Literature Review 4
2.1 The Three Major Free Weight Training Movements 4
2.2 Free Weight Training Tracking Methods 6
2.2.1 Sensors 6
2.2.2 Computer Vision 8
2.3 Human Pose Estimation 10
2.3.1 Skeleton-Based Action Recognition 11
2.4 Object Detection 13
2.5 Instance Segmentation 14
Chapter 3 Methods and Procedures 17
3.1 Research Framework 17
3.2 Action Recognition 18
3.2.1 Human Keypoint Detection 18
3.2.2 Dataset 19
3.2.3 Feature Engineering 20
3.2.4 Model Training and Inference 24
3.3 Barbell and Weight Plate Tracking 25
3.3.1 Barbell Detection 25
3.3.1.2 Model Training and Inference 26
3.3.2 Weight Plate Segmentation 28
3.4 Barbell Grip State and Lift-Off State Detection 30
3.4.1 Barbell Grip State Detection 30
3.4.2 Barbell Lift-Off State Detection 31
3.5 Set Detector (SetDetector) 32
3.6 Repetition Detector (RepDetector) 33
3.7 Multi-View Ensemble Method 33
Chapter 4 Experimental Results and Discussion 35
4.1 Experimental Setup 35
4.1.1 Environment and Equipment 35
4.1.2 Experimental Dataset 36
4.1.3 Evaluation Metrics 38
4.2 Experiment 1: Movement, Set, and Repetition Evaluation 39
4.3 Experiment 2: Weight Evaluation 42
4.4 Experiment 3: Training/Rest Time Evaluation 45
4.5 Discussion of Experimental Results 46
Chapter 5 Tracking Process 47
5.1 Tracked Items 47
5.1.1 Sets 47
5.1.2 Repetitions 47
5.1.3 Movements 49
5.1.4 Training/Rest Time 50
5.1.5 Weight 50
5.2 Tracking System Demonstration 52
Chapter 6 Conclusion and Future Work 54
6.1 Conclusion 54
6.2 Future Work 54
References 55

A. Altmann, et al., “Permutation importance: a corrected feature importance measure,” Bioinformatics, vol. 26, no. 10, pp. 1340-1347, 2010.

A. Paoli and A. Bianco, “What Is Fitness Training? Definitions and Implications: A Systematic Review Article,” Iranian Journal of Public Health, vol. 44, no. 5, p. 602, 2015.

A. Vaswani, et al., "Attention is all you need," Advances in neural information processing systems, vol. 30, 2017.

A. Bochkovskiy, C.-Y. Wang and H.-Y. M. Liao, “YOLOv4: Optimal speed and accuracy of object detection,” arXiv preprint arXiv:2004.10934, 2020.

B. Ferreira, et al., “Deep learning approaches for workout repetition counting and validation,” Pattern Recognition Letters, 2021.

L. Breiman, “Random forests,” Machine Learning, vol. 45, no. 1, pp. 5-32, 2001.

C. Crema, et al., “IMU-based solution for automatic detection and classification of exercises in the fitness scenario,” 2017 IEEE Sensors Applications Symposium (SAS), Glassboro, NJ, USA, pp. 1-6, 2017. doi: 10.1109/SAS.2017.7894068

C. Seeger, A. Buchmann and K. Van Laerhoven, “myHealthAssistant: A phone-based body sensor network that captures the wearer’s exercises throughout the day,” Proc. 6th Int. Conf. Body Area Netw. (BodyNets), pp. 1-7, 2011.

C. Plizzari, M. Cannici and M. Matteucci, “Skeleton-based action recognition via spatial and temporal transformer networks,” Computer Vision and Image Understanding, vol. 208, p. 103219, 2021.

C.-Y. Wang, A. Bochkovskiy and H.-Y. M. Liao, “YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors,” arXiv preprint arXiv:2207.02696, 2022.

D. Bolya, et al., “YOLACT: Real-time instance segmentation,” Proceedings of the IEEE/CVF international conference on computer vision, 2019.

D. Morris, et al., “RecoFit: Using a wearable sensor to find, recognize, and count repetitive exercises,” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 3225-3234, 2014.

E. Xie, et al., “Polarmask: Single shot instance segmentation with polar representation,” Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2020.

F. Ding, et al., “TTBA: An RFID-based tracking system for two basic actions in free-weight exercises,” ACM Q2SWinet, pp. 7-14, 2018.

R. Girshick, et al., “Rich feature hierarchies for accurate object detection and semantic segmentation,” Proceedings of the IEEE conference on computer vision and pattern recognition, 2014.

R. Girshick, “Fast R-CNN,” Proceedings of the IEEE international conference on computer vision, 2015.

W. Gu, S. Bai and L. Kong, “A review on 2D instance segmentation based on deep neural networks,” Image and Vision Computing, vol. 120, p. 104401, 2022.

H. Ding, et al., “FEMO: A Platform for Free-Weight Exercise Monitoring with RFIDs,” Proc. ACM SenSys, pp. 141-154, 2015.

H. Fang, et al., “RMPE: Regional multi-person pose estimation,” in Proc. IEEE Int. Conf. Comput. Vis., pp. 2353–2362, 2017.

H. Li, et al., “InFit: Combination Movement Recognition For Intensive Fitness Assistant Via Wi-Fi,” IEEE Transactions on Mobile Computing, 2022. doi: 10.1109/TMC.2022.3209656.

H. Xu, et al., “GHUM & GHUML: Generative 3D human shape and articulated pose models,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020.

H. Ying, et al., “AICoacher: A System Framework for Online Realtime Workout Coach,” Proceedings of the 29th ACM International Conference on Multimedia, 2021.

J. E. Layne and M. E. Nelson, “The effects of progressive resistance training on bone density: a review,” Medicine & Science in Sports & Exercise, vol. 31, no. 1, pp. 25–30, 1999. doi:10.1097/00005768-199901000-00006

J. Liu, et al., “Spatio-temporal LSTM with trust gates for 3D human action recognition,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 9912 LNCS, 2016.

J. Redmon, et al., “You only look once: Unified, real-time object detection,” Proceedings of the IEEE conference on computer vision and pattern recognition, 2016.

K. Sun, et al., “Deep high-resolution representation learning for human pose estimation,” Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019.

K. He, et al., “Mask R-CNN,” Proceedings of the IEEE international conference on computer vision, 2017.

W. T. Mahavier, “A gentle discovery method (the modified Texas method),” College Teaching, vol. 45, no. 4, pp. 132-135, 1997.

M. Merenda, et al., “A Novel Fitness Tracker Using Edge Machine Learning,” 2020 IEEE 20th Mediterranean Electrotechnical Conference (MELECON), Palermo, Italy, pp. 212-215, 2020.

M. M. Gharasuie, N. Jennings and S. Jain, “Performance monitoring for exercise movements using mobile cameras,” in Proceedings of the Workshop on Body-Centric Computing Systems, BodySys’21, New York, NY, USA, pp. 1–6, 2021.

M. Radhakrishnan, et al., “Gym usage behavior & desired digital interventions: An empirical study,” Proceedings of the 14th EAI International Conference on Pervasive Computing Technologies for Healthcare, 2020.

N. Gao, et al., “SSAP: Single-shot instance segmentation with affinity pyramid,” Proceedings of the IEEE/CVF international conference on computer vision, 2019.

P. Sutphin, “Powerlifting: The TOTAL Package,” AuthorHouse, 2014.

R. Khurana, et al., “Beyond Steps: Challenges and Opportunities in Fitness Tracking,” In Woodstock ’18: ACM Symposium on Neural Gaze Detection, June 03–05, 2018.

R. Khurana, et al., “GymCam: Detecting, recognizing and tracking simultaneous exercises in unconstrained scenes,” Proc. ACM Interact., Mobile, Wearable Ubiquitous Technol., vol. 2, no. 4, pp. 1–17, Dec. 2018.

R. Nelson and S. Hayes, “Theoretical explanations for reactivity in self-monitoring,” Behavior Modification, vol. 5, no. 1, pp. 3–14, 1981.

S. J. Fleck and W. J. Kraemer, Designing Resistance Training Programs, 4th ed., Human Kinetics, 2014.

S. Ren, et al., “Faster R-CNN: Towards real-time object detection with region proposal networks,” Advances in neural information processing systems, vol. 28, 2015.

S. Yan, Y. Xiong and D. Lin, "Spatial temporal graph convolutional networks for skeleton-based action recognition," Proceedings of the AAAI conference on artificial intelligence, vol. 32, no. 1, 2018.

T. Chen and C. Guestrin, “XGBoost: A scalable tree boosting system,” Proceedings of the ACM SIGKDD international conference on knowledge discovery and data mining, 2016.

T. K. Ho, “Random decision forests,” Proceedings of 3rd International Conference on Document Analysis and Recognition, IEEE, vol. 1, pp. 278-282, 1995.

Ultralytics, “YOLOv8,” https://docs.ultralytics.com/, 2023.

V. Bazarevsky, et al., “BlazePose: On-device real-time body pose tracking,” Proc. CVPR Workshop Comput. Vis. Augmented Virtual Reality, pp. 1-4, 2020.

V. Bazarevsky, et al., “BlazeFace: Sub-millisecond neural face detection on mobile GPUs,” arXiv preprint arXiv:1907.05047, 2019.

W. Liu, et al., “SSD: Single shot multibox detector,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 9905 LNCS, pp. 21–37, 2016.

W. Westcott, “Resistance Training is Medicine: Effects of Strength Training on Health,” Current Sports Medicine Reports, vol. 11, no. 4, pp. 209–216, 2012. doi:10.1249/JSR.0b013e31825dabb8

X. Wang, et al., “SOLO: Segmenting objects by locations,” Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XVIII 16. Springer International Publishing, 2020.

Y. Zhu, et al., “FitAssist: Virtual fitness assistant based on WiFi,” in Proc. 16th EAI Int. Conf. Mobile Ubiquitous Syst., pp. 328–337, 2019.

Z. Cao, et al., “OpenPose: realtime multi-person 2D pose estimation using part affinity fields,” arXiv preprint arXiv:1812.08008, 2018.

Z. Cai and N. Vasconcelos, “Cascade R-CNN: delving into high quality object detection,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.

D. Willemsen, “Barbell squat,” Men'sPower Publishing, 2019. https://menspower.nl/barbell-bench-press/

D. Willemsen, “Barbell bench press,” Men'sPower Publishing, 2019. https://menspower.nl/barbell-bench-press/

D. Luna, “Is Sumo Deadlift Easier? For Some, Yes,” Inspire US, 2023. https://www.inspireusafoundation.org/is-sumo-deadlift-easier/

D. David, “Random Forest Classifier Tutorial: How to Use Tree-Based Algorithms for Machine Learning,” freeCodeCamp, 2020. https://www.freecodecamp.org/news/how-to-use-the-tree-based-algorithm-for-machine-learning/

Knowledge Transfer, “PyTorch K-Fold Cross-Validation using Dataloader and Sklearn,” Knowledge Transfer, 2021. https://androidkt.com/pytorch-k-fold-cross-validation-using-dataloader-and-sklearn/

RangeKing, “Brief summary of YOLOv8 model structure,” GitHub, 2023. https://github.com/ultralytics/ultralytics/issues/189
