
Graduate Student: Chung, Yang (鐘暘)
Thesis Title: Artificial Neural Network Pruning Strategy Based on Optimization Algorithm (基於最佳化演算法的類神經網路剪枝策略)
Advisor: Lin, Cheng-Hung (林政宏)
Oral Defense Committee: 陳勇志, 賴穎暉, 林政宏
Oral Defense Date: 2021/09/24
Degree: Master
Department: Department of Electrical Engineering
Year of Publication: 2021
Academic Year of Graduation: 109 (ROC academic year)
Language: Chinese
Number of Pages: 21
Chinese Keywords: 深度學習, 類神經網路, 網路剪枝
English Keywords: Deep learning, neural network, network pruning
Research Method: Experimental design
DOI URL: http://doi.org/10.6345/NTNU202101425
Thesis Type: Academic thesis
Access Count: 71 views, 15 downloads
  • With the continuous progress of deep learning, neural network architectures contain far more parameters and consume far more memory than before, and accordingly place higher demands on hardware. How to obtain comparable recognition performance under limited memory and hardware has therefore become an issue worth attention. Network pruning is the most direct way to address the problem of excessive parameters: by deleting unnecessary parameters from the network, a large amount of memory can be saved.
    In previous work on network pruning, the usual strategy is to delete the smaller weights. These pruning methods assume that the smaller weights in a network have less influence on the network itself and can therefore be discarded. We argue, however, that this assumption does not always hold for neural networks.
    In this thesis we consider the possibility that small weights may also be important. We propose an optimized pruning strategy that, during pruning, retains not only the larger weights but also the smaller weights selected by the optimization strategy. We show that preserving the important small weights in a network benefits the accuracy of the pruned network, allowing it to achieve the best trade-off between a low parameter count and high accuracy.
    The experimental results show that, compared with keeping only the larger weights, keeping the smaller weights chosen by the optimization method yields higher accuracy at the same pruning ratio.
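    For reference, the following is a minimal sketch of the conventional magnitude-based pruning baseline described above, in which every weight below a magnitude threshold is removed. It illustrates the standard approach the thesis compares against, not the author's code; the layer shape and pruning ratio are arbitrary choices for the example.

    import torch

    def magnitude_prune(weight: torch.Tensor, prune_ratio: float) -> torch.Tensor:
        """Return a binary mask that keeps only the largest-magnitude weights."""
        k = int(weight.numel() * prune_ratio)       # number of weights to drop
        if k == 0:
            return torch.ones_like(weight)
        flat = weight.abs().flatten()
        threshold = torch.kthvalue(flat, k).values  # k-th smallest magnitude
        return (weight.abs() > threshold).float()   # 1 = keep, 0 = prune

    # Example usage: prune 80% of a randomly initialised fully connected layer.
    w = torch.randn(256, 128)
    mask = magnitude_prune(w, prune_ratio=0.8)
    pruned_w = w * mask
    print(f"kept {int(mask.sum())} of {w.numel()} weights")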

    With the continuous progress in the field of deep learning, artificial neural networks use more parameters and a larger memory footprint than before, and place correspondingly higher demands on hardware. How to achieve high recognition accuracy with limited memory and hardware has become a critical issue. Network pruning is the most direct way to solve the problem of excessive parameters: by deleting unnecessary parameters in the network, a large amount of memory can be saved.
    In previously proposed methods, network pruning strategies mainly consider pruning the small weights. The main assumption of these methods is that the smaller weights in the network have less impact on the network itself and can be discarded. We believe, however, that this assumption is not absolute for neural networks.
    In this thesis, we assume that small weights may also be important. We propose an optimized pruning strategy: during pruning, not only the larger weights are kept, but also the smaller weights selected by the optimization. We treat the retention of parameters in a network model as an optimization problem. Experimental results show that keeping important small weights in the network is beneficial to the accuracy of the pruned network, so that the pruned network can achieve the best trade-off between a low parameter count and high accuracy.
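    The abstract frames the choice of which small weights to retain as an optimization problem, and the table of contents indicates a genetic algorithm is used for this search. The sketch below is only a hypothetical illustration of that idea, not the author's implementation: the fitness function fitness_fn (assumed here to return the validation accuracy of the pruned network), the population size, mutation rate, and keep_ratio are all illustrative assumptions.

    import numpy as np

    def evolve_small_weight_mask(n_small, fitness_fn, keep_ratio=0.2,
                                 pop_size=20, generations=50, seed=0):
        """Genetic-algorithm search for a binary mask over the small-magnitude weights.

        fitness_fn(mask) is assumed to return the validation accuracy of the
        network pruned so that the magnitude-selected large weights plus the
        masked-in small weights are retained.  All hyper-parameters here are
        illustrative assumptions, not the thesis's settings.
        """
        rng = np.random.default_rng(seed)
        k = max(1, int(n_small * keep_ratio))              # small weights to retain

        def random_mask():
            mask = np.zeros(n_small, dtype=bool)
            mask[rng.choice(n_small, size=k, replace=False)] = True
            return mask

        population = [random_mask() for _ in range(pop_size)]
        for _ in range(generations):
            scores = [fitness_fn(m) for m in population]
            order = np.argsort(scores)[::-1]
            parents = [population[i] for i in order[: pop_size // 2]]  # keep fittest half
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = rng.choice(len(parents), size=2, replace=False)
                cross = rng.random(n_small) < 0.5                      # uniform crossover
                child = np.where(cross, parents[a], parents[b])
                child ^= rng.random(n_small) < 0.01                    # light mutation
                children.append(child)
            population = parents + children
        return max(population, key=fitness_fn)

    # Hypothetical usage with a toy fitness function; in practice the score
    # would come from evaluating the pruned network on a validation set.
    best_mask = evolve_small_weight_mask(
        n_small=100, fitness_fn=lambda m: float(m.sum()) / 100.0)
    print("small weights kept:", int(best_mask.sum()))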

    Acknowledgments
    List of Figures
    List of Tables
    Chapter 1  Introduction
        1.1  Research Background and Motivation
        1.2  Research Objectives
        1.3  Overview of the Research Method
        1.4  Research Contributions
        1.5  Thesis Organization
    Chapter 2  Literature Review
        2.1  Pruning by Weight Magnitude
        2.2  Pruning by Optimization
        2.3  Summary
        2.4  Optimized Pruning with a Genetic Algorithm
        2.5  Limitations of the Genetic Algorithm
    Chapter 3  Research Method
        3.1  Weight Pruning Framework
        3.2  Optimized Pruning Strategy
    Chapter 4  Experimental Results
        4.1  Experiments
    Chapter 5  Conclusion and Future Work
        5.1  Conclusion
        5.2  Future Work
    References
    Autobiography

