
Author: Lin, Yi-Ting (林宜亭)
Title: Improvement of RIDNet deep learning denoising model: Adjustment based on network structure and loss function
Advisor: Yueh, Mei-Heng (樂美亨)
Oral defense committee: Yueh, Mei-Heng (樂美亨); Huang, Tsung-Ming (黃聰明); Kuo, Yueh-Cheng (郭岳承)
Oral defense date: 2024/06/18
Degree: Master
Department: Department of Mathematics
Year of publication: 2024
Academic year: 112
Language: English
Number of pages: 48
Keywords: Image denoising, Deep learning, Convolutional Neural Network, Activation Function, Loss Function
Research methods: Experimental design; comparative study
DOI URL: http://doi.org/10.6345/NTNU202400999
Thesis type: Academic thesis
Abstract:

    Since the late 1970s, with the continuous development in digital image processing and computer vision, image denoising techniques have undergone significant improvements and innovations. From the initial methods based on spatial and transform domain filters, dictionary learning, and statistical models, to the present-day machine learning techniques based on artificial intelligence, the methods for image denoising have become increasingly diverse and sophisticated. Despite the considerable achievements of many denoising models, they still suffer from some drawbacks, such as the need for manual parameter tuning, poor optimization, or applicability limited to specific types of noise.
    With the enhanced learning capabilities of convolutional neural networks (CNNs) and advancements in hardware technology, deep learning-based techniques have gradually become the primary methods for image denoising. Convolutional networks can handle large volumes of data and perform efficient training and learning. However, noise in real-world scenarios is often unknown, making blind denoising models particularly crucial in contemporary image processing. These models must possess robust adaptive capabilities to extract noise features from images and perform effective denoising without requiring prior knowledge of the noise. Consequently, in this thesis we focus on RIDNet, which incorporates attention mechanisms and residual learning for blind denoising. We modify the number of its enhancement attention module (EAM) layers, its activation functions (ACT), and its loss functions, and compare the resulting models with other existing deep learning models such as DnCNN and CBDNet. These comparisons help us understand the strengths and weaknesses of the models and provide further guidance for improving image denoising techniques.
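The loss functions and image-quality metrics the thesis compares (Sections 2.6 and 2.7) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the thesis code; the function names and the default `delta` are assumptions for the example.

```python
import numpy as np

def huber_loss(pred, target, delta=1.0):
    """Huber loss: quadratic for residuals below delta, linear beyond it.

    Interpolates between MSE (robust optimization near the optimum) and
    MAE (robustness to outliers), the two other losses the thesis compares.
    """
    r = np.abs(pred - target)
    quadratic = 0.5 * r**2
    linear = delta * (r - 0.5 * delta)
    return np.mean(np.where(r <= delta, quadratic, linear))

def psnr(clean, denoised, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val].

    Higher is better; defined via the MSE between clean and denoised images.
    """
    mse = np.mean((clean - denoised) ** 2)
    return 10.0 * np.log10(max_val**2 / mse)
```

For a residual of 0.5 with `delta=1.0` the Huber loss falls on its quadratic branch (0.5 · 0.5² = 0.125), while a residual of 2.0 falls on the linear branch (1.0 · (2.0 − 0.5) = 1.5).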

    1 Introduction
      1.1 Research Background
      1.2 Research Objective
    2 Related Works
      2.1 Types of Image Noise
        2.1.1 Salt and Pepper Noise
        2.1.2 Gaussian Noise
        2.1.3 Poisson Noise
        2.1.4 Speckle Noise
      2.2 Traditional and CNNs Image Denoising Methods
        2.2.1 Traditional Image Denoising Methods
        2.2.2 CNNs Image Denoising Methods
      2.3 The Architecture of CNNs
      2.4 Activation Functions (ACT)
        2.4.1 The Primary Purpose of Activation Functions
      2.5 Attention Mechanism
        2.5.1 Operation of Attention Mechanism
        2.5.2 Score Function e
      2.6 Loss Function
        2.6.1 Mean-Square Error (MSE)
        2.6.2 Mean-Absolute Error (MAE)
        2.6.3 Huber Loss
      2.7 Image Quality Assessment (IQA)
        2.7.1 Mean-Square Error (MSE)
        2.7.2 Peak Signal to Noise Ratio (PSNR)
        2.7.3 Structure Similarity Index Method (SSIM)
    3 Research Methods
      3.1 Datasets
      3.2 The Architecture of the RIDNet Model
        3.2.1 Feature Extraction Layer Me()
        3.2.2 Feature Learning Residual Layer Mfl()
        3.2.3 Reconstruction Layer Mr()
      3.3 Model Training and Testing
        3.3.1 Datasets used for Training and Testing
        3.3.2 Training Details
    4 Experimental Results
      4.1 Original RIDNet Model
      4.2 Loss Function Selection
      4.3 Number of Layers in the EAM Selection
      4.4 ACT Selection
      4.5 Comparison of PSNR and SSIM for Different RIDNet Models
      4.6 Visual Denoising on SIDD, RENOIR, PolyU Datasets
    5 Conclusions
    References


    Electronic full text unavailable for download; embargoed until 2029/07/14.