Graduate Student: 張維德 (Wei-Te Chang)
Thesis Title: 適用於可見光環境之高速眼動儀系統設計 (Design of High Speed Gaze Tracking System Under Visible Light Condition)
Advisor: 高文忠 (Kao, Wen-Chung)
Degree: Master
Department: 電機工程學系 (Department of Electrical Engineering)
Year of Publication: 2013
Graduation Academic Year: 101
Language: Chinese
Number of Pages: 89
Keywords (Chinese): 可見光, 眼動儀, 高速
Keywords (English): visible light, gaze tracking, high speed
Thesis Type: Academic thesis
Abstract (translated from the Chinese):
Most eye-tracking systems rely on strong infrared (IR) illumination to raise image brightness and reach good accuracy, but this causes mild discomfort for the user. In this thesis, we propose a high-speed gaze tracking system that runs under ordinary light sources and overcomes both the lack of illumination and the light flicker that appears when the sampling rate is raised.
The proposed system consists of four major parts: determining an appropriate search region in the image; identifying the direction the subject is likely gazing through feature recognition; finding suitable features within the search region and determining the fixation point; and finally computing the mapping from the fixation point in the image to the fixation point on the screen, together with the accuracy evaluation of the proposed system. At a sampling rate of 240 fps, the system achieves an accuracy of 0.76 degrees horizontally and 1.43 degrees vertically.
Abstract (English):
Most gaze tracking systems rely on strong IR illumination to enhance image brightness and achieve high accuracy, which results in a poor user experience. In this thesis, we present a high-speed gaze tracking system that operates under normal lighting conditions and overcomes both insufficient illumination and the flicker caused by a high capture rate.
The proposed system contains four parts: how to determine the search space in the image, how to roughly estimate the gaze direction using pattern recognition techniques, how to extract useful features within the known search space, and how to compute the mapping function from image coordinates to screen coordinates; finally, we evaluate the accuracy of the system. At a sampling rate of 240 frames/s, the system achieves an accuracy of 0.76 degrees and 1.43 degrees for the horizontal and vertical coordinates of the detected gaze center, respectively.
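The abstract does not spell out how the image-to-screen mapping or the degree figures are computed. A common approach in gaze tracking is to fit a second-order polynomial mapping by least squares during a calibration phase, and to report accuracy as degrees of visual angle derived from the on-screen error and the viewing distance. The following Python/NumPy sketch illustrates that idea under these assumptions; the polynomial model, the function names, and the screen-geometry parameters are illustrative, not details taken from the thesis.

import numpy as np

def fit_mapping(eye_pts, screen_pts):
    """Least-squares fit of a second-order polynomial mapping
    (x, y) in image coordinates -> (sx, sy) in screen coordinates.
    eye_pts, screen_pts: (N, 2) arrays from N calibration targets (N >= 6)."""
    x, y = eye_pts[:, 0], eye_pts[:, 1]
    # Design matrix with the six second-order terms.
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coeffs, *_ = np.linalg.lstsq(A, screen_pts, rcond=None)  # shape (6, 2)
    return coeffs

def apply_mapping(coeffs, eye_pt):
    """Map one detected gaze point in the image to screen coordinates."""
    x, y = eye_pt
    a = np.array([1.0, x, y, x * y, x**2, y**2])
    return a @ coeffs  # predicted (sx, sy)

def angular_error_deg(pred_px, true_px, px_per_cm, view_dist_cm):
    """Convert an on-screen error in pixels to degrees of visual angle
    for a viewer at view_dist_cm from the screen; returns per-axis degrees."""
    err_cm = np.abs(np.asarray(pred_px) - np.asarray(true_px)) / px_per_cm
    return np.degrees(np.arctan2(err_cm, view_dist_cm))

Reporting per-axis accuracy as the abstract does (0.76 degrees horizontal, 1.43 degrees vertical) would then amount to averaging angular_error_deg over the test fixations separately for each axis.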