Title 基於卷積核冗餘的神經網路壓縮機制
Compression of Convolutional Neural Networks Based on Kernel Redundancy
Author Chen, Nai-Wei (陳乃瑋)
Advisor Liao, Wen-Hung (廖文宏)
Keywords Convolutional neural network
Model compression
Diversifying algorithm
Kernel similarity
Date 2018
Uploaded 12-Feb-2019 15:47:36 (UTC+8)
Abstract The model size and number of floating-point operations (FLOPs) required by convolutional neural networks make it difficult to deploy these models on mobile devices or embedded platforms. In this thesis, we propose a method, referred to as the diversifying algorithm, to compress CNN models. The key idea is to preserve only the most representative filters in each convolutional layer while maintaining the diversity of the kernels. The similarity relations among the filters of a layer are expressed as an undirected graph: each node denotes a filter, and the weight of the edge between two nodes is the cosine distance between the corresponding filters. Which nodes are removed is determined by several factors taken together: the individual edge weights, the sum of a node's edge weights (its aggregate similarity), and the sum of the absolute values of the weights of the filter it represents. After pruning, the number of output channels in every convolutional layer is reduced, which lowers the parameter count and FLOPs accordingly without introducing a sparse network; the compressed model is then retrained to restore its original recognition accuracy.
To assess the generality of the diversifying algorithm, we conducted ten sets of compression experiments using the VGG, ResNet, and DenseNet model families on the CIFAR-10 and CIFAR-100 datasets, running extensive trials in each set and recording all results in detail to guide the choice of compression parameters. The results show that the diversifying algorithm is effective. On CIFAR-10, our method reduces the parameters of VGG16 by 78.6% and its FLOPs by about 46%; if a drop of roughly 1% in accuracy is tolerated, the reductions reach 90.7% of the parameters and nearly 70% of the FLOPs. On CIFAR-100, it reduces the parameters of VGG16 by 46% and the FLOPs by 18%; with the same roughly 1% accuracy tolerance, the reductions reach about 60.7% of the parameters and nearly 37.5% of the FLOPs.
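The following is a minimal NumPy sketch of how the filter-similarity graph and pruning scores described in the abstract might be computed for a single convolutional layer. It is an illustrative reconstruction under stated assumptions, not the thesis's exact procedure: the function names, the keep_ratio parameter, and the combined score are hypothetical, and the layer-wise threshold selection and retraining steps are omitted.

```python
# Minimal sketch of kernel-redundancy scoring for one conv layer.
# Assumptions: Keras-style weight layout (kh, kw, c_in, c_out); the
# combined score and keep_ratio are hypothetical illustrations.
import numpy as np

def cosine_distance(a, b):
    """Cosine distance between two flattened filters."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0:
        return 1.0
    return 1.0 - float(np.dot(a, b) / denom)

def prune_layer(weights, keep_ratio=0.5):
    """Return indices of the output channels (filters) to keep."""
    c_out = weights.shape[-1]
    filters = weights.reshape(-1, c_out).T          # one row per filter

    # Undirected graph: edge weight = cosine distance between two filters.
    # The diagonal is left at 1 so a filter's similarity to itself
    # contributes nothing to its aggregate similarity below.
    dist = np.ones((c_out, c_out)))if False else np.ones((c_out, c_out))
    for i in range(c_out):
        for j in range(i + 1, c_out):
            dist[i, j] = dist[j, i] = cosine_distance(filters[i], filters[j])

    similarity_sum = (1.0 - dist).sum(axis=1)       # aggregate similarity
    l1_norm = np.abs(filters).sum(axis=1)           # sum of |weights|

    # Hypothetical combined score: filters that are very similar to many
    # others and have small weight magnitude are the pruning candidates.
    score = similarity_sum / (l1_norm + 1e-8)
    n_keep = max(1, int(round(keep_ratio * c_out)))
    keep = np.argsort(score)[:n_keep]               # keep least redundant
    return np.sort(keep)

if __name__ == "__main__":
    # Example: a random 3x3 conv layer with 16 input and 64 output channels.
    rng = np.random.default_rng(0)
    w = rng.normal(size=(3, 3, 16, 64))
    kept = prune_layer(w, keep_ratio=0.7)
    print(f"keeping {len(kept)} of {w.shape[-1]} filters")
```

In an actual compression pipeline, the returned indices would be used to slice the layer's filters (and the corresponding input channels of the following layer) before retraining the pruned model to recover accuracy.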
參考文獻 [1] Song Han, Huizi Mao, William J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv:1510.00149v5, Feb 2016.
[2] Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf. Pruning Filters for Efficient ConvNets. arXiv:1608.08710v3, Mar 2017.
[3] Forrest N. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, William J. Dally, Kurt Keutzer. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. arXiv:1602.07360v4, Nov 2016.
[4] Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv:1704.04861v1, Apr 2017.
[5] Alex Krizhevsky, Vinod Nair, Geoffrey Hinton. CIFAR-10 and CIFAR-100 datasets. https://www.cs.toronto.edu/~kriz/cifar.html, last visited on Jan 2018.
[6] Hung-yi Lee (李宏毅). 一天搞懂深度學習 (Understanding Deep Learning in One Day), [DSC 2016] lecture slides. https://www.slideshare.net/tw_dsconf/ss-62245351, last visited on Jan 2018.
[7] Yann LeCun, Corinna Cortes, Christopher J.C. Burges. THE MNIST DATABASE of handwritten digits. http://yann.lecun.com/exdb/mnist/, last visited on Oct 2018.
[8] ImageNet. http://www.image-net.org/, last visited on Jan 2018.
[9] ImageNet Large Scale Visual Recognition Competition (ILSVRC). http://www.image-net.org/challenges/LSVRC/, last visited on Jan 2018.
[10] Yuanqing Lin, Fengjun Lv, Shenghuo Zhu, Ming Yang, Timothee Cour, Kai Yu, Liangliang Cao, Thomas Huang. Large-scale image classification: Fast feature extraction and SVM training. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference, pages 1689-1696. IEEE, 2011.
[11] Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in neural information processing systems, pages 1097-1105, 2012.
[12] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich. Going Deeper with Convolutions. arXiv:1409.4842v1, Sep 2014.
[13] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-778, 2016.
[14] Kaiming He. Learning Deep Features for Visual Recognition. http://deeplearning.csail.mit.edu/cvpr2017_tutorial_kaiminghe.pdf, last visited on Oct 2018.
[15] Embedded Systems Developer Kits, Modules, & SDKs | NVIDIA Jetson. https://www.nvidia.com/en-us/autonomous-machines/embedded-systems-dev-kits-modules/, last visited on Oct 2018.
[16] Raspberry Pi. https://www.raspberrypi.org/, last visited on Oct 2018.
[17] Song Han, Jeff Pool, John Tran, William J. Dally. Learning both Weights and Connections for Efficient Neural Networks. arXiv:1506.02626v3, Oct 2015.
[18] Babajide O. Ayinde, Jacek M. Zurada. Building Efficient ConvNets using Redundant Feature Pruning. arXiv:1802.07653v1, Feb 2018.
[19] Karen Simonyan, Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv:1409.1556v6, Apr 2015.
[20] Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger. Densely Connected Convolutional Networks. arXiv:1608.06993v5, Jan 2018.
[21] Sergey Ioffe, Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. arXiv:1502.03167v3, Mar 2015.
[22] Rupesh Kumar Srivastava, Klaus Greff, Jürgen Schmidhuber. Highway Networks. arXiv:1505.00387v2, Nov 2015.
[23] TensorFlow. https://www.tensorflow.org/, last visited on Oct 2018.
[24] Keras: Deep Learning for humans. https://github.com/keras-team/keras, last visited on Oct 2018.
Description Master's thesis
National Chengchi University
Department of Computer Science
105753018
Source http://thesis.lib.nccu.edu.tw/record/#G0105753018
Type thesis
URI http://nccur.lib.nccu.edu.tw/handle/140.119/122287
Table of Contents
Abstract (in Chinese)
Abstract (in English)
Table of Contents
List of Tables
List of Figures
Chapter 1  Introduction
1.1  Background and Motivation
1.2  Research Objectives
1.3  Thesis Organization
Chapter 2  Technical Background and Related Work
2.1  Background and Breakthroughs of Deep Learning
2.2  Overview of Convolutional Neural Networks
2.2.1  Full Connectivity vs. Local Connectivity
2.2.2  Weight Sharing and Multiple Kernels
2.2.3  Pooling Layers
2.2.4  Stacked Convolutional Layers
2.3  Model Compression
2.3.1  Deep Compression
2.3.2  Pruning Filters for Efficient ConvNets
2.3.3  Building Efficient ConvNets using Redundant Feature Pruning
2.3.4  Summary
Chapter 3  Methodology
3.1  The Diversifying Algorithm
3.1.1  Filter Similarity Score
3.1.2  Similarity Sum
3.1.3  Sum of Absolute Filter Weights
3.1.4  Algorithm Steps
3.2  CNN Models
3.2.1  VGG
3.2.2  ResNet
3.2.3  DenseNet
3.3  The CIFAR Datasets
3.4  Deep Learning Frameworks and Development Environment
3.5  Experimental Design
3.5.1  Experimental Procedure
3.5.2  Layer-wise Pruning
3.5.3  Selecting the Similarity Threshold
3.5.4  Models and Settings for the Compression Experiments
Chapter 4  Experimental Results and Analysis
4.1  Experiment 1: VGG16 (CIFAR-10, CIFAR-100)
4.1.1  VGG16 (CIFAR-10)
4.1.2  VGG16 (CIFAR-100)
4.1.3  VGG16 Summary
4.2  Experiment 2: ResNet56 / ResNet20 (CIFAR-10, CIFAR-100)
4.2.1  ResNet56 (CIFAR-10, CIFAR-100)
4.2.2  ResNet20 (CIFAR-10, CIFAR-100)
4.2.3  ResNet56 / ResNet20 Summary
4.3  Experiment 3: DenseNet-40-12 / DenseNet-BC-40-12 (CIFAR-10, CIFAR-100)
4.3.1  DenseNet-40-12 (CIFAR-10, CIFAR-100)
4.3.2  DenseNet-BC-40-12 (CIFAR-10, CIFAR-100)
4.3.3  DenseNet-40-12 / DenseNet-BC-40-12 Summary
4.4  Overall Results and Analysis
Chapter 5  Conclusions and Future Directions
References
Appendix 1  VGG16 Channel Counts Before and After Compression
Appendix 2  ResNet56 Channel Counts Before and After Compression
Appendix 3  ResNet20 Channel Counts Before and After Compression
Appendix 4  DenseNet-40-12 Channel Counts Before and After Compression
Appendix 5  DenseNet-BC-40-12 Channel Counts Before and After Compression
Format application/pdf (4,767,142 bytes)
DOI 10.6814/THE.NCCU.CS.003.2019.B02