
Author: 洪銘鴻 (Hong, Ming-Hong)
Thesis Title: 基於邊緣計算和深度學習之病媒蚊分類系統 (A Vector Mosquitoes Classification System Based on Edge Computing and Deep Learning)
Advisor: 陳伶志 (Chen, Ling-Jyh)
Degree: Master (碩士)
Department: Department of Computer Science and Information Engineering (資訊工程學系)
Thesis Publication Year: 2019
Academic Year: 107
Language: Chinese
Number of pages: 50
Keywords (in Chinese): 登革熱、深度學習、邊緣計算、卷積神經網路、影像處理、電腦視覺 (dengue fever, deep learning, edge computing, convolutional neural network, image processing, computer vision)
DOI URL: http://doi.org/10.6345/NTNU202000442
Thesis Type: Academic thesis/dissertation
Reference times: Clicks: 176, Downloads: 19
    Dengue fever and Japanese encephalitis are viral infectious diseases transmitted to humans by mosquitoes. In the most recent outbreak, in Tainan City in 2015, dengue fever first appeared only in the northern part of the city, then spread across all of Tainan at an alarming rate, and finally reached the entire island of Taiwan. That year, confirmed cases exceeded 40,000 and deaths reached 218, while asymptomatic infections were estimated at nine to ten times the number of symptomatic cases. If a patient is bitten again by a vector mosquito and cross-infected, the fatality rate of severe cases rises sharply to more than 20%; there is currently no preventive vaccine and no specific drug for treatment. The vector mosquitoes that transmit dengue fever are Aedes aegypti and Aedes albopictus. Japanese encephalitis has a fatality rate of roughly 20% or more, about 40% of survivors suffer neurological or psychiatric sequelae, and there is likewise no specific drug for treatment; its vector mosquitoes are Culex tritaeniorhynchus and Culex annulus. Avoiding vector mosquito bites is currently the only way to prevent dengue fever and Japanese encephalitis.
    To address dengue fever and Japanese encephalitis, this thesis proposes a vector mosquito classification system: an intelligent mosquito trap with image classification accuracy of up to 98% and a counting function, combining edge computing, deep-learning image processing, and computer vision. Edge computing handles object detection, while deep learning handles the classification and counting of mosquitoes; together these steps remedy the inability of today's mosquito traps and mosquito-killing lamps to classify mosquito species. Image data are collected with the smart trap, focusing on the two dengue vectors common in Taiwan, Aedes albopictus and Aedes aegypti, and the two common Japanese encephalitis vectors, Culex tritaeniorhynchus and Culex annulus; classification is binary, between Aedes and Culex. Because the system and device gather richer information on Taiwanese mosquitoes (the number, species, time, and location of mosquitoes entering the trap), this information can serve as an important reference for subsequent vector-control measures.

    Dengue fever and Japanese encephalitis are mosquito-borne infectious diseases caused by viruses. Dengue is particularly dangerous for children and can lead to death, although, according to the World Health Organization (WHO), fewer than 1 percent of cases are fatal with proper medical care. Dengue symptoms, which may include high fever, headache, joint and muscle pain, and a skin rash, typically begin three days to two weeks after infection. The most recent outbreak of dengue fever in Taiwan occurred in Tainan City in 2015: it first appeared only in the northern part of the city, then spread across all of Tainan at an alarming rate, and eventually spread throughout Taiwan. That year, the number of confirmed cases exceeded 40,000, and the number of deaths reached 218. There is no vaccine for prevention and no specific drug for treatment. The mosquitoes that transmit dengue fever are Aedes aegypti and Aedes albopictus, and the mosquitoes that transmit Japanese encephalitis are Culex tritaeniorhynchus and Culex annulus. Most importantly, avoiding vector mosquito bites is the only way to prevent dengue fever and Japanese encephalitis. To alleviate these problems, this thesis proposes the vector mosquito classification system, an intelligent mosquito-catching system with image classification accuracy of up to 98%. The system combines edge computing, deep-learning image processing, and computer vision to address the lack of species classification in mosquito traps and mosquito-killing lamps. Data collection and processing focus on the two dengue vectors common in Taiwan, Aedes albopictus and Aedes aegypti, and the two common Japanese encephalitis vectors, Culex tritaeniorhynchus and Culex annulus. Classification is binary, between Aedes and Culex.
    The system and device obtain richer information on mosquitoes in Taiwan, including the number, species, time, and location of vector mosquitoes entering the trap. This information can support measures taken against vector mosquitoes.
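The edge-side detection step described above (deciding whether a frame contains an object worth classifying) can be sketched with simple background subtraction. The function name, thresholds, and toy frames below are illustrative assumptions, not the thesis implementation:

```python
import numpy as np

def detect_foreground(frame, background, diff_threshold=30, area_threshold=50):
    """Flag a frame as containing an object when enough pixels differ
    from the reference background (a simplified background-subtraction step)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    foreground_area = int(np.count_nonzero(diff > diff_threshold))
    return foreground_area >= area_threshold

# Toy 64x64 grayscale frames: a static background and a frame with a dark blob.
background = np.full((64, 64), 200, dtype=np.uint8)
frame_empty = background.copy()
frame_with_object = background.copy()
frame_with_object[20:30, 20:30] = 40  # a 100-pixel dark region

print(detect_foreground(frame_empty, background))       # False
print(detect_foreground(frame_with_object, background)) # True
```

In the actual system a running background model (e.g. a Gaussian mixture model, as surveyed in Chapter 2) would replace the fixed reference frame, and only frames passing this check would be cropped and sent to the classifier.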

    List of Figures
    Chapter 1: Introduction
    Chapter 2: Literature Review
      Section 1: Related Work on Mosquito Classification
      Section 2: Background of Object Detection
        2.2.1 Background Subtraction
        2.2.2 Gaussian Mixture Model (GMM)
      Section 3: Background of Multi-Object Tracking
        2.3.1 k-means
        2.3.2 Median Flow
      Section 4: Background of Convolutional Neural Networks
        2.4.1 Convolution Layer
        2.4.2 Activation Function
        2.4.3 Pooling Layer
        2.4.4 Fully Connected Layer
    Chapter 3: Vector Mosquito Classification System
      Section 1: System Architecture
      Section 2: Hardware Development
        3.2.1 Hardware Overview
        3.2.2 Hardware Optimization
      Section 3: Region of Interest
        3.3.1 Color-to-Grayscale Conversion
        3.3.2 Background Updating
        3.3.3 Foreground Area Calculation
        3.3.4 Object Detection Threshold
        3.3.5 Two-Dimensional Multi-Object Localization
        3.3.6 Region of Interest
      Section 4: Convolutional Neural Network
    Chapter 4: Experimental Analysis
      Section 1: Training the CNN with a Mixed Dataset
        4.1.1 Training Data Setup
        4.1.2 CNN Architecture and Mixed-Data Model Parameter Settings
        4.1.3 Test Method 1
        4.1.4 Test Method 2
        4.1.5 Test Method 3
      Section 2: Training the CNN with a Non-Mixed Dataset
        4.2.1 Training Data Setup
        4.2.2 CNN Non-Mixed-Data Model Parameter Settings
        4.2.3 Identifying Vector Mosquito Frames with Test Data
        4.2.4 Identifying Vector Mosquito Videos with Test Data
        4.2.5 Mixed-Data Model vs. Non-Mixed-Data Model
      Section 3: System Implementation
        4.3.1 Experimental Setup
        4.3.2 Experimental Environment
        4.3.3 Data Collection Results
        4.3.4 Data Collection Summary
    Chapter 5: Conclusion and Future Work
    Chapter 6: Appendix
      Section 1: Effect of Changing the Optimizer on Training
      Section 2: Effect of Changing the Activation Function on Training
    Chapter 7: References
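The CNN building blocks enumerated in Chapter 2, Section 4 (convolution layer, activation function, pooling layer, fully connected layer) can be illustrated with a minimal NumPy sketch. All shapes, names, and random weights here are illustrative assumptions, not the architecture trained in the thesis:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation) of a single-channel image."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Activation function: clamp negative responses to zero."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling that halves each spatial dimension."""
    h, w = x.shape[0] // size, x.shape[1] // size
    x = x[:h * size, :w * size].reshape(h, size, w, size)
    return x.max(axis=(1, 3))

def fully_connected(x, weights, bias):
    """Flatten the feature map and project it to class scores."""
    return x.ravel() @ weights + bias

rng = np.random.default_rng(0)
image = rng.random((8, 8))                         # toy grayscale input
kernel = rng.random((3, 3))                        # one learned filter
feature = max_pool(relu(conv2d(image, kernel)))    # (8,8) -> (6,6) -> (3,3)
logits = fully_connected(feature, rng.random((9, 2)), np.zeros(2))
print(feature.shape, logits.shape)  # (3, 3) (2,)
```

The two output scores correspond to the binary Aedes-vs-Culex decision; a real network would stack several such layers and learn the weights by backpropagation.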

