
Author: 張維德 (Wei-Te Chang)
Title: 適用於可見光環境之高速眼動儀系統設計
(Design of High Speed Gaze Tracking System Under Visible Light Condition)
Advisor: 高文忠 (Kao, Wen-Chung)
Degree: Master
Department: Department of Electrical Engineering
Year of publication: 2013
Academic year of graduation: 101 (AY 2012-13)
Language: Chinese
Pages: 89
Keywords: visible light, gaze tracking, high speed
Thesis type: Academic thesis
    Most gaze tracking systems rely on strong infrared (IR) illumination to boost brightness and reach good accuracy, but this causes mild discomfort for the user. In this thesis, we propose a high-speed gaze tracking system that operates under ordinary visible light and overcomes both the insufficient illumination and the light flicker that arise when the sampling rate is raised.
    The proposed system consists of four parts: how to decide an appropriate search range in the image, how to estimate the direction the subject is likely gazing through feature recognition, how to extract suitable features within the search range and determine the gaze point, and finally how to compute the mapping from the gaze point in the image to the gaze point on the screen, together with the accuracy evaluation of the proposed system. At a sampling rate of 240 fps, the system achieves an accuracy of 0.76 degrees horizontally and 1.43 degrees vertically.

    Most gaze tracking systems rely on strong IR light to enhance illumination and achieve high accuracy, but this results in a poor user experience. In this thesis, we present a high-speed gaze tracking system that operates under normal lighting conditions and overcomes both the insufficient illumination and the flicker caused by a high capture rate.
    The proposed system contains four parts: how to decide the search space in the image, how to roughly estimate the gaze direction with pattern-recognition techniques, how to extract useful features within the known search space, how to compute the mapping function from image coordinates to screen coordinates, and finally how to evaluate the accuracy of the system. At a sampling rate of 240 frames/s, the system achieves an accuracy of 0.76 degrees horizontally and 1.43 degrees vertically for the detected gaze center.
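    The abstract mentions computing a mapping function from image coordinates to screen coordinates and reporting accuracy in degrees of visual angle. The thesis's exact formulation is not reproduced in this record; a minimal sketch of a common second-order polynomial calibration fitted by least squares, with hypothetical pixel-density and viewing-distance parameters, might look like:

    ```python
    import numpy as np

    def fit_poly_mapping(img_pts, scr_pts):
        """Fit a second-order polynomial mapping from image (x, y) to screen
        (X, Y) by least squares. Each screen coordinate is modeled as
        a0 + a1*x + a2*y + a3*x*y + a4*x^2 + a5*y^2 (a common choice for
        gaze calibration, not necessarily the thesis's exact model)."""
        x, y = img_pts[:, 0], img_pts[:, 1]
        A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
        coeffs, *_ = np.linalg.lstsq(A, scr_pts, rcond=None)
        return coeffs  # shape (6, 2): one column of coefficients per screen axis

    def apply_poly_mapping(coeffs, img_pts):
        """Map detected iris centers in image coordinates to screen coordinates."""
        x, y = img_pts[:, 0], img_pts[:, 1]
        A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
        return A @ coeffs

    def pixel_error_to_degrees(err_px, px_per_mm, distance_mm):
        """Convert an on-screen error in pixels to visual angle in degrees,
        given screen pixel density and the subject's viewing distance."""
        return np.degrees(np.arctan2(err_px / px_per_mm, distance_mm))
    ```

    In practice the coefficients would be estimated from a small calibration grid (for example, a 3x3 pattern of on-screen targets the subject fixates in turn), and the per-axis angular error would then be reported separately for the horizontal and vertical directions, as in the abstract.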

    Table of Contents
    Abstract (Chinese)
    Abstract (English)
    Acknowledgments
    Table of Contents
    List of Figures
    List of Tables
    Chapter 1  Introduction
      1.1 Motivation and Background
      1.2 Overview of Gaze Tracking Research
      1.3 Problems of Existing Visible-Light Gaze Trackers and Overview of the Proposed System
      1.4 Thesis Organization
    Chapter 2  Related Work on Gaze Tracking
      2.1 Non-Invasive Gaze Trackers
        2.1.1 Infrared Gaze Tracking Systems
        2.1.2 Visible-Light Gaze Tracking Systems
      2.2 Existing Visible-Light Gaze Trackers
      2.3 Architecture of the Proposed Gaze Tracking System
    Chapter 3  High-Speed Gaze Tracking System Architecture
      3.1 System Block Diagram
      3.2 Design Rationale of the System Flow
        3.2.1 Comparison of Contrast-Enhancement Algorithms: Tone Reproduction and Histogram Equalization
        3.2.2 The Starburst Algorithm and Its Problems
        3.2.3 Hough Circle Transform Combined with Least-Squares Fitting
    Chapter 4  High-Speed Gaze Tracking Algorithm Flow
      4.1 Locating the Eye Region
        4.1.1 Image Contrast Enhancement
        4.1.2 Locating Candidate Eye Regions
        4.1.3 Skin-Color Detection for Separating Skin and Non-Skin Features
        4.1.4 Reconstructing the Eye Region
      4.2 Estimating the Eye Position
        4.2.1 Closed-Eye Image Detection
        4.2.2 Local Eye Feature Enhancement
        4.2.3 Eye Feature Extraction with DCT
        4.2.4 Eye Feature Training and Prediction
      4.3 Locating the Iris Center
        4.3.1 Removing Eye Glints
        4.3.2 Locating Candidate Iris Centers
        4.3.3 Iris Feature Extraction
        4.3.4 Finding the Best-Fit Iris Circle
      4.4 Gaze Point Computation and Calibration
        4.4.1 Separating Iris-Center Clusters
        4.4.2 Filtering Iris Center Points
        4.4.3 Computing Gaze-Point Coordinate Transformation Parameters
        4.4.4 Obtaining the Eye Gaze Point
    Chapter 5  Experimental Results
      5.1 Experimental Environment
      5.2 Analysis and Testing of the High-Speed Gaze Tracker Modules
      5.3 Subject Experiment Results
    Chapter 6  Conclusion and Future Work
      6.1 Conclusion
      6.2 Future Work
    References
    Autobiography
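    Section 3.2.3 of the outline pairs the Hough circle transform with least-squares fitting for locating the iris circle. The thesis's exact formulation is not reproduced in this record; a minimal sketch of the standard algebraic (Kasa) least-squares circle fit, which could refine a coarse Hough estimate from edge points on the iris boundary, is:

    ```python
    import numpy as np

    def fit_circle_lstsq(pts):
        """Algebraic least-squares circle fit (Kasa method): rewrite the circle
        equation as x^2 + y^2 = 2*a*x + 2*b*y + c, which is linear in (a, b, c).
        The center is (a, b) and the radius is sqrt(c + a^2 + b^2)."""
        x, y = pts[:, 0], pts[:, 1]
        A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
        rhs = x**2 + y**2
        (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
        return (a, b), np.sqrt(c + a**2 + b**2)
    ```

    A coarse Hough detection restricts which edge points belong to the iris boundary; the closed-form fit above then refines the center to sub-pixel precision at negligible cost, which matters at a 240 fps processing budget.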

