| Graduate Student: | 李聿宸 |
|---|---|
| Thesis Title: | 基於CNN對於多人環境進行人臉辨識之研究 (Research on multi-person environment face recognition based on CNN) |
| Advisor: | 李忠謀 |
| Degree: | Master |
| Department: | Department of Computer Science and Information Engineering (資訊工程學系) |
| Year of Publication: | 2020 |
| Graduation Academic Year: | 108 |
| Language: | Chinese |
| Number of Pages: | 34 |
| Chinese Keywords: | 人臉辨識、深度學習、卷積神經網路 |
| English Keywords: | face recognition, deep learning, convolutional neural network |
| DOI URL: | http://doi.org/10.6345/NTNU202001346 |
| Thesis Type: | Academic thesis |
Face recognition is a popular topic in today's society. Every person has unique facial features, and compared with traditional identification methods such as passwords or personal ID cards, face recognition neither requires carrying a physical ID card at all times nor raises the worry of forgetting a password. Once a facial image has been captured, identity can be verified by comparing its facial features against a face database.
In this study, a camera mounted above the classroom captures the classroom environment. The facial images obtained are of low resolution, so facial features are less prominent, and there are further problems such as uneven lighting and off-angle faces, which cause traditional face recognition to perform poorly. This study uses YOLOv3, a deep-learning-based face detection technique, to obtain each person's facial image, and trains a suitable Convolutional Neural Network (CNN) model for face recognition. For low-resolution facial images of 20 × 20 pixels or larger, including faces at different angles, the model achieves a recognition accuracy above 97%. Because faces change slightly over time, the experimental results show that a recognition accuracy of 94% is still maintained on facial images taken four months later.
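The record describes a two-stage pipeline: YOLOv3-based face detection followed by CNN-based recognition. As a rough illustration of the first stage only, the sketch below loads a YOLOv3 detector through OpenCV's DNN module and returns face boxes for cropping; the model file names (`yolov3-face.cfg`, `yolov3-face.weights`), the thresholds, the 416 × 416 blob size, and the input image are illustrative assumptions, not artifacts of the thesis.

```python
# Minimal sketch (not the thesis code): YOLOv3 face detection via OpenCV DNN.
import cv2
import numpy as np

CONF_THRESHOLD = 0.5   # minimum detection confidence (assumed value)
NMS_THRESHOLD = 0.4    # non-maximum suppression overlap threshold (assumed value)

# Hypothetical config/weight files for a YOLOv3 network trained to detect faces.
net = cv2.dnn.readNetFromDarknet("yolov3-face.cfg", "yolov3-face.weights")

def detect_faces(frame):
    """Return a list of (x, y, w, h) face boxes found in a BGR frame."""
    h, w = frame.shape[:2]
    # YOLOv3 expects a square, normalised input blob; 416x416 is a common size.
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    boxes, scores = [], []
    for output in outputs:
        for det in output:
            confidence = float(det[5:].max())   # best class score for this box
            if confidence < CONF_THRESHOLD:
                continue
            # Convert centre/size coordinates (relative) to a pixel box.
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            scores.append(confidence)

    if not boxes:
        return []
    # Suppress overlapping detections of the same face.
    keep = cv2.dnn.NMSBoxes(boxes, scores, CONF_THRESHOLD, NMS_THRESHOLD)
    return [boxes[i] for i in np.array(keep).flatten()]

if __name__ == "__main__":
    frame = cv2.imread("classroom.jpg")          # placeholder input image
    for (x, y, bw, bh) in detect_faces(frame):
        face_crop = frame[max(y, 0):y + bh, max(x, 0):x + bw]
        # Each crop would then be resized and passed to the recognition CNN.
```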
Face recognition has become an important research topic. Compared with more traditional identification methods, such as passwords or personal ID cards, automatic face recognition requires neither carrying a physical ID card nor remembering a password. After a facial image is obtained, identity can be verified by comparing each person's facial features against a face database.
In this study, cameras placed above the classroom capture the classroom setting. The resolution of the resulting facial images is therefore relatively low, which makes facial features less distinct, and the images also suffer from uneven brightness and varying face angles. This study uses a deep-learning face detection method based on YOLOv3 to obtain each person's face image, and trains a suitable face recognition model with a convolutional neural network (CNN). For low-resolution facial images as small as 20×20 pixels and at different angles, the trained model achieves an accuracy above 97%. Experimental results further show that the model maintains a recognition accuracy of 94%, even on faces captured four months after the initial training images.
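For the recognition stage, one plausible realisation is a small convolutional classifier trained on the cropped, resized faces. The sketch below uses tf.keras; the 32 × 32 grayscale input size, the number of identities (`NUM_STUDENTS`), and all layer and training choices are assumptions for illustration rather than the architecture actually trained in the thesis.

```python
# Minimal sketch (not the thesis architecture): a small CNN classifier
# for low-resolution face crops produced by the detection stage.
import tensorflow as tf

NUM_STUDENTS = 50          # assumed number of identities in the course
INPUT_SHAPE = (32, 32, 1)  # assumed size after resizing the grayscale crops

def build_face_cnn():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same",
                               input_shape=INPUT_SHAPE),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(NUM_STUDENTS, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    model = build_face_cnn()
    model.summary()
    # Training would use the resized face crops and per-student labels, e.g.:
    # model.fit(train_images, train_labels, epochs=30, validation_split=0.1)
```

In such a setup, each detected face crop is resized to the classifier's input shape and labelled with a student identity before training; the trained model then predicts an identity for every crop extracted from a new classroom frame.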