
Author: 林鈺庭 (Lin, Yu-Ting)
Title: 應用於視覺敏銳度預測之彈性化插入式模組 (A Flexible Plug-in Module for Visual Acuity Prediction)
Advisor: 林政宏 (Lin, Cheng-Hung)
Oral Defense Committee: 謝易庭 (Hsieh, Yi-Ting); 賴穎暉 (Lai, Ying-Hui); 林政宏 (Lin, Cheng-Hung)
Oral Defense Date: 2022/08/29
Degree: Master
Department: Department of Electrical Engineering
Year of Publication: 2022
Graduation Academic Year: 110 (2021–2022)
Language: Chinese
Pages: 76
Keywords: epiretinal membrane, fine-grained visual classification, optical coherence tomography (OCT) images, optical coherence tomography angiography (OCTA) images, retinal thickness (RT) maps, visual acuity prediction
DOI: http://doi.org/10.6345/NTNU202201449
Document Type: Academic thesis
Chinese Abstract (translated):

Epiretinal membrane (ERM) is a chronic eye disease caused by a fibrous membrane covering the macula. Its clinical presentation depends on the thickness and contractility of the membrane: mild cases show no symptoms at all, while severe cases may suffer permanent loss of central vision. Surgery is the only treatment for ERM, and there is as yet no international standard for the optimal timing of surgery; in general, the degree of a patient's preoperative visual acuity impairment (commonly called "vision") is used to estimate the postoperative recovery. Because retinal images come in many modalities with complex, variable structures and subtle, easily missed lesions, the naked eye alone can hardly distinguish fine changes in visual acuity at a glance. In recent years, the application of deep learning to retinal disease detection has matured considerably, yet within this subfield of disease detection, studies on visual acuity prediction remain scarce.

We therefore propose a plug-in module that is easy to attach to a wide range of backbone networks. It combines background segmentation and feature fusion techniques to effectively integrate the output features of every block of the backbone. With only image-level annotations, it predicts the visual acuity of ERM patients from optical coherence tomography (OCT) images, optical coherence tomography angiography (OCTA) images, and retinal thickness (RT) maps. Experiments show that the overall mean ten-fold cross-validation accuracies on OCT images, OCTA images, and RT maps reach 80.558% (+8.176%), 81.502% (+5.502%), and 82.534% (+7.112%), respectively. Grad-CAM is then used for feature visualization to help physicians make timely diagnoses and take appropriate measures.
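To make the module idea concrete, here is a minimal PyTorch sketch of the general pattern described above: tap the output of each backbone block, let a weakly supervised selector keep the most confident spatial locations per block, and fuse the selections into one prediction. This is an illustrative sketch under assumed names and shapes (`PluginModule`, a four-stage ResNet-50, top-k confidence selection, concatenation fusion), not the thesis implementation.

```python
# Minimal sketch of a plug-in module over a multi-block backbone.
# Assumptions (not from the thesis): a ResNet-50 exposing four stages,
# a top-k confidence selector, and simple concatenation fusion.
import torch
import torch.nn as nn
import torchvision.models as models


class PluginModule(nn.Module):
    def __init__(self, num_classes: int, top_k: int = 32):
        super().__init__()
        backbone = models.resnet50(weights=None)
        # Expose the four residual stages so each block's output can be tapped.
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1,
                                  backbone.relu, backbone.maxpool)
        self.blocks = nn.ModuleList([backbone.layer1, backbone.layer2,
                                     backbone.layer3, backbone.layer4])
        dims = [256, 512, 1024, 2048]
        # One auxiliary classifier per block scores every spatial location.
        self.scorers = nn.ModuleList(nn.Conv2d(d, num_classes, 1) for d in dims)
        self.top_k = top_k
        self.fuse = nn.Linear(num_classes * len(dims) * top_k, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.stem(x)
        selected = []
        for block, scorer in zip(self.blocks, self.scorers):
            x = block(x)
            logits = scorer(x)                      # (B, C, H, W)
            flat = logits.flatten(2)                # (B, C, H*W)
            # Weakly supervised selection: keep the top-k most confident
            # locations per block, judged by the max class probability.
            conf = flat.softmax(1).amax(1)          # (B, H*W)
            idx = conf.topk(self.top_k, dim=1).indices
            picked = torch.gather(
                flat, 2, idx.unsqueeze(1).expand(-1, flat.size(1), -1))
            selected.append(picked.flatten(1))
        # Feature fusion: concatenate the selections from every block.
        return self.fuse(torch.cat(selected, dim=1))


model = PluginModule(num_classes=3)
out = model(torch.randn(2, 3, 224, 224))  # -> logits of shape (2, 3)
```

Because the module only reads block outputs and adds its own heads, any backbone that exposes per-stage features (ResNet, EfficientNet, Swin, and so on) could be dropped in with adjusted channel dimensions.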

English Abstract:

Epiretinal membrane (ERM) is a chronic eye disease caused by a fibrous membrane covering the macula. Its clinical manifestations depend on the thickness and contractility of the membrane, ranging from asymptomatic to permanent loss of central vision. Surgery is the only treatment for ERM, and there is currently no international standard for the optimal timing of surgery. Generally, the extent of a patient's preoperative visual impairment is used to evaluate the expected postoperative recovery. Given the wide variety of retinal imaging modalities, their complex and variable structures, and lesions that are subtle and hard to detect, it is difficult to distinguish fine changes in visual acuity with the naked eye alone. In recent years, the application of deep learning to retinal disease detection has matured rapidly, but within this subcategory of disease detection, few studies address visual acuity prediction.

Therefore, we propose a plug-in module that is easy to apply to various backbone networks. It combines background segmentation and feature fusion techniques to effectively integrate the output features of each block of the backbone network. Using only image-level annotations, optical coherence tomography (OCT) images, optical coherence tomography angiography (OCTA) images, and retinal thickness (RT) maps were used to predict visual acuity in patients with ERM. The experimental results show that the mean ten-fold cross-validation accuracies on OCT images, OCTA images, and RT maps reach 80.558% (+8.176%), 81.502% (+5.502%), and 82.534% (+7.112%), respectively. Grad-CAM is then used for feature visualization to assist physicians in making timely diagnoses and taking appropriate measures.
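The abstract closes by offering Grad-CAM heat maps to clinicians. Below is a minimal sketch of the standard Grad-CAM computation (Selvaraju et al., 2017): capture the last convolutional stage's activations and gradients with hooks, weight each channel by its spatially pooled gradient, and apply ReLU to the weighted sum. The backbone and tapped layer (`resnet50`, `layer4`) are illustrative assumptions, not the thesis's exact setup.

```python
# Minimal Grad-CAM sketch: channel weights are the spatially pooled
# gradients of the target class score w.r.t. the last conv feature map.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet50(weights=None).eval()
feats, grads = {}, {}

# Hooks capture the last convolutional stage's activations and gradients.
model.layer4.register_forward_hook(lambda m, i, o: feats.update(a=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224)        # stand-in for a preprocessed OCT image
logits = model(x)
target = logits.argmax(dim=1).item()   # explain the predicted class
logits[0, target].backward()

# alpha_k: one weight per channel, the global average of its gradient map.
alpha = grads["a"].mean(dim=(2, 3), keepdim=True)            # (1, C, 1, 1)
cam = F.relu((alpha * feats["a"]).sum(dim=1, keepdim=True))  # (1, 1, H, W)
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                    align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)     # heat map in [0, 1]
```

The resulting `cam` tensor can be overlaid on the input image to highlight the retinal regions that drove the prediction, which is the role feature visualization plays for physicians here.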

Table of Contents:
Acknowledgments  i
Chinese Abstract  iii
English Abstract  v
Table of Contents  vii
List of Tables  ix
List of Figures  xi
Notation  xiii
Chapter 1  Introduction  1
  1.1 Research Background  2
  1.2 Research Motivation  4
  1.3 Research Objectives  6
  1.4 Research Methods  6
  1.5 Research Contributions  8
  1.6 Thesis Organization  9
Chapter 2  Literature Review  11
  2.1 Weakly Supervised Fine-Grained Visual Classification  12
    2.1.1 Localization-based Methods  13
    2.1.2 Attention-based Methods  14
    2.1.3 Transformer-based Methods  20
  2.2 Applications of Deep Learning in Retinal Image Analysis  23
  2.3 Plug-in Module (PIM)  25
Chapter 3  Research Methods  27
  3.1 Module Design  27
    3.1.1 Weakly Supervised Selector  28
    3.1.2 Feature Fusion  30
  3.2 Loss Function  33
Chapter 4  Experimental Results  37
  4.1 Experimental Setup  37
    4.1.1 Dataset Description and Splits  37
    4.1.2 Evaluation Metrics  44
    4.1.3 Data Augmentation  45
    4.1.4 Training Details  46
  4.2 Ablation Studies  47
  4.3 Analysis and Discussion  50
    4.3.1 Choice of Backbone Network  50
    4.3.2 Effect of Pretrained Models  51
    4.3.3 Effect of Selector Threshold  53
    4.3.4 Feature Visualization  54
Chapter 5  Conclusion and Future Work  59
  5.1 Conclusion  59
  5.2 Future Work  59
References  61
Appendix 1  73
Appendix 2  75


Electronic full text embargoed until 2025/09/30.