
Student: Wang, Jing-Yong (王景用)
Title: Dynamic Sample Selection for Learning with Noisy Labels (噪聲學習:漸進式的樣本選擇)
Advisor: Yeh, Mei-Chen (葉梅珍)
Committee Members: Wang, Yu-Chiang (王鈺強); Kang, Li-Wei (康立威); Yeh, Mei-Chen (葉梅珍)
Oral Defense Date: 2023/07/24
Degree: Master
Department: Department of Computer Science and Information Engineering (資訊工程學系)
Year of Publication: 2023
Graduation Academic Year: 111 (ROC calendar, 2022-2023)
Language: Chinese
Pages: 37
Keywords: Deep learning, Image classification, Semi-supervised learning, Noisy label (深度學習、圖像分類、半監督式學習、噪聲標籤)
Research Method: Experimental design
DOI URL: http://doi.org/10.6345/NTNU202301265
Thesis Type: Academic thesis
Abstract:
With the rapid development of artificial intelligence, deep learning has achieved promising results across many image recognition tasks. However, these models are typically trained on clean datasets, and building a large, clean dataset carries a substantial annotation cost; even some large open-source datasets contain human labeling errors.

To reduce the cost of data labeling and the impact of incorrect labels, learning with noisy labels studies how to train a stable, usable model on a dataset containing mislabeled samples. Previously proposed clean-sample selection techniques, such as Gaussian mixture models and Jensen-Shannon (JS) divergence, cannot accurately identify all clean samples. From the perspective of model prediction stability, we therefore follow prior work that incorporates the K-nearest-neighbors (KNN) algorithm and perform a multi-stage selection based on both the stability of model predictions and the similarity of sample features. Under a two-network architecture adopted from recent work, we find that the predictive ability of the KNN model in the early stage of training is worse than that of the two networks. To use both sources of predictions effectively, we rely on a prediction-stability measure to gradually bring in the KNN model, which helps separate the training samples into clean and noisy sets. Experimental results on three benchmarks show that our method performs well and outperforms state-of-the-art methods under different noise types and noise rates, demonstrating its effectiveness.
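For readers unfamiliar with the loss-based selection baseline named in the abstract, below is a minimal sketch, assuming the common formulation popularized by DivideMix: fit a two-component Gaussian mixture model (GMM) to per-sample training losses and treat the component with the smaller mean as the clean one. The function name and the toy data are illustrative only, not the thesis's exact procedure.

# Sketch: GMM-based clean-sample selection over per-sample losses.
# Assumes per-sample losses have already been computed (and typically
# min-max normalized) under the current model.
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_clean_probability(per_sample_loss):
    """Posterior probability that each sample belongs to the low-loss
    (presumed clean) mixture component."""
    losses = np.asarray(per_sample_loss).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, max_iter=100, reg_covar=5e-4)
    gmm.fit(losses)
    posteriors = gmm.predict_proba(losses)        # shape (N, 2)
    clean_component = int(np.argmin(gmm.means_))  # component with smaller mean loss
    return posteriors[:, clean_component]

# Toy usage: low-loss samples receive high clean probability.
rng = np.random.default_rng(0)
loss = np.concatenate([rng.random(900) * 0.3,         # mostly clean samples
                       0.7 + rng.random(100) * 0.3])  # mostly noisy samples
clean_mask = gmm_clean_probability(loss) > 0.5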
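The progressive hand-off between the networks' predictions and the KNN signal can likewise be sketched. The abstract does not specify the exact weighting or thresholds, so the blending rule, the stability proxy, and all names below are our assumptions; the sketch only shows the shape of the idea: trust KNN label agreement in feature space more as the networks' predictions stabilize.

# Hedged sketch: combine a network-based clean score with KNN label
# agreement, weighting the KNN term more heavily as predictions stabilize.
# The blending rule and the 0.5 threshold are illustrative assumptions.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_agreement(features, labels, k=10):
    """Fraction of each sample's k nearest neighbors sharing its given label."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
    _, idx = nn.kneighbors(features)       # column 0 is the sample itself
    neighbor_labels = labels[idx[:, 1:]]   # shape (N, k)
    return (neighbor_labels == labels[:, None]).mean(axis=1)

def prediction_stability(pred_history):
    """pred_history: (T, N) array of predicted classes over the last T epochs.
    Returns the per-sample fraction of epochs agreeing with the latest epoch,
    a simple proxy for how stable each prediction has become."""
    return (pred_history == pred_history[-1]).mean(axis=0)

def select_clean(features, labels, pred_history, network_clean_score, k=10):
    stability = prediction_stability(pred_history)      # values in [0, 1]
    agreement = knn_agreement(features, labels, k)
    # Progressive hand-off: as mean stability grows, the KNN agreement
    # contributes more to the final clean score.
    alpha = float(stability.mean())
    score = (1.0 - alpha) * network_clean_score + alpha * agreement
    return score > 0.5                                  # boolean clean mask

Here network_clean_score could be, for example, the output of gmm_clean_probability from the previous sketch.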

Table of Contents:
Chapter 1: Introduction
    1.1 Research Background
    1.2 Research Motivation
    1.3 Thesis Organization
Chapter 2: Literature Review
    2.1 Architecture
    2.2 Warm-up
    2.3 Sample selection
    2.4 Relabeling
    2.5 Training with clean and noisy data
Chapter 3: Methods and Procedures
    3.1 Model
    3.2 Warm-up
    3.3 Sample selection
    3.4 Training
    3.5 Loss function
    3.6 Contrastive learning
Chapter 4: Experimental Results
    4.1 Dataset
    4.2 Experimental Details
    4.3 Experiments
Chapter 5: Conclusion
References
