Please use this identifier to cite or link to this item: https://ah.lib.nccu.edu.tw/handle/140.119/124868
DC Field | Value | Language
dc.contributor.advisor蔡炎龍zh_TW
dc.contributor.advisorTsai, Yen-Lungen_US
dc.contributor.author許嘉宏zh_TW
dc.contributor.authorHsu, Chia-Hungen_US
dc.creator許嘉宏zh_TW
dc.creatorHsu, Chia-Hungen_US
dc.date2019en_US
dc.date.accessioned2019-08-07T08:35:21Z-
dc.date.available2019-08-07T08:35:21Z-
dc.date.issued2019-08-07T08:35:21Z-
dc.identifierG0104751003en_US
dc.identifier.urihttp://nccur.lib.nccu.edu.tw/handle/140.119/124868-
dc.description碩士zh_TW
dc.description國立政治大學zh_TW
dc.description應用數學系zh_TW
dc.description104751003zh_TW
dc.description.abstract本篇文章主要使用卷積神經網路來進行圖像辨識,資料來源用台北故宮 博物院線上資料庫,其中圖像收藏量三萬筆,本篇將範圍縮小至畫軸的部 分,總計有 4 千筆,因為每張圖像有主要主題跟次要主題,無法直接用卷 積神經網路來分類。所以先利用 SLIC 演算法將圖像分割,再來進行標籤及 訓練模型。最後如有新的作品要進行辨識,也進行同樣分割,用模型辨識 後,再統整結果得到此作品有哪些主題性。zh_TW
dc.description.abstractIn this paper, we aim to recognize a single image containing multiple genres. We collected data from the National Palace Museum. A traditional CNN would assign only one genre to each image, so we first segment each image with the SLIC algorithm, which partitions it into regions of similar pixels and roughly uniform size; these segments are then labeled and used to train the model. Given a new image, we apply the SLIC algorithm with the same parameters, feed the resulting segments into the model, and aggregate the per-segment predictions to recognize the multiple genres present in the image.en_US
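The abstract describes segmenting each painting with SLIC before classification. The thesis's own implementation is not reproduced here; the sketch below is a minimal, illustrative pure-NumPy version of the core SLIC idea only — k-means clustering of pixels in a joint (color, position) space with grid-initialized centers. The function name, parameters, and default values are assumptions for illustration; in practice one would use a full SLIC implementation such as `skimage.segmentation.slic`.

```python
import numpy as np

def slic_like_segments(image, n_segments=16, compactness=0.2, n_iters=5):
    """Illustrative SLIC-style superpixel sketch: k-means on joint
    (color, position) features.  `image` is an (H, W, 3) float array
    with colors in [0, 1]; returns an (H, W) integer label map."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Approximate spacing between initial cluster centers.
    s = np.sqrt(h * w / n_segments)
    # Scale spatial coordinates by compactness/s so that `compactness`
    # trades off color similarity against spatial proximity.
    feats = np.concatenate(
        [image.reshape(-1, 3),
         (compactness / s) * np.stack([ys, xs], axis=-1).reshape(-1, 2)],
        axis=1)
    # Initialize cluster centers on a regular grid, as SLIC does.
    grid_y = np.linspace(s / 2, h - s / 2, max(1, round(h / s))).astype(int)
    grid_x = np.linspace(s / 2, w - s / 2, max(1, round(w / s))).astype(int)
    idx = np.repeat(grid_y, len(grid_x)) * w + np.tile(grid_x, len(grid_y))
    centers = feats[idx]  # advanced indexing copies, so updates are safe
    for _ in range(n_iters):
        # Assign every pixel to its nearest center in the joint space.
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned pixels.
        for k in range(len(centers)):
            members = feats[labels == k]
            if len(members):
                centers[k] = members.mean(axis=0)
    return labels.reshape(h, w)
```

Note the sensible default for `compactness` depends on the color scale (real SLIC operates in CIELAB, where color distances are much larger than in [0, 1] RGB); a full implementation also restricts each pixel's search to nearby centers, which this sketch omits for brevity.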
dc.description.tableofcontentsChapter 1  Introduction  1
Chapter 2  Deep Learning  3
  Section 1  Neurons and Neural Networks  4
  Section 2  Activation Function  7
  Section 3  Loss Function  9
  Section 4  Gradient Descent Method  11
Chapter 3  Convolutional Neural Network  13
  Section 1  Convolution Layer  13
  Section 2  Pooling Layer  22
Chapter 4  Data Collection and Processing  25
  Section 1  K-means Clustering  25
  Section 2  SLIC Algorithm  27
  Section 3  Make the Labels  29
Chapter 5  Model Construction  33
  Section 1  Models Structure  33
  Section 2  Transfer Learning  37
  Section 3  Imbalance Data  38
  Section 4  Result  38
Chapter 6  Conclusion  41
Appendix A  Python Script  42
  A.1  Segment the Painting  42
  A.2  Create the Label  45
  A.3  Train the Model with the Model D  47
  A.4  Train the Model with Transfer Learning  50
  A.5  GUI  53
Bibliography  64zh_TW
dc.format.extent13356256 bytes-
dc.format.mimetypeapplication/pdf-
dc.source.urihttp://thesis.lib.nccu.edu.tw/record/#G0104751003en_US
dc.subject深度學習zh_TW
dc.subject卷積神經網路zh_TW
dc.subject影像辨識zh_TW
dc.subjectDeep Learningen_US
dc.subjectNeural Networken_US
dc.subjectCNNen_US
dc.subjectImage Recognitionen_US
dc.title深度學習於國畫主題辨識之應用zh_TW
dc.titleIdentifying Chinese painting genres with deep learningen_US
dc.typethesisen_US
dc.relation.reference[1] Radhakrishna Achanta, Appu Shaji, Kevin Smith, Aurelien Lucchi, Pascal Fua, and Sabine Süsstrunk. SLIC superpixels, 2010.

[2] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473, 2014.

[3] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res., 12:2121–2159, 2011.

[4] Ross B. Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. CoRR, abs/1311.2524, 2013.

[5] J. B. Heaton, N. G. Polson, and J. H. Witte. Deep learning in finance. CoRR, abs/1602.06561, 2016.

[6] Donald Hebb. The Organization of Behavior. 1949.

[7] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In Peter L. Bartlett, Fernando C. N. Pereira, Christopher J. C. Burges, Léon Bottou, and Kilian Q. Weinberger, editors, NIPS, pages 1106–1114, 2012.

[8] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1097–1105. Curran Associates, Inc., 2012.

[9] Jan Kukačka, Vladimir Golkov, and Daniel Cremers. Regularization for deep learning: A taxonomy, 2017.

[10] S. Lawrence, C. L. Giles, Ah Chung Tsoi, and A. D. Back. Face recognition: a convolutional neural-network approach. Neural Networks, IEEE Transactions on, 8(1):98–113, January 1997.

[11] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, May 2015.

[12] Min Lin, Qiang Chen, and Shuicheng Yan. Network in network, 2013.

[13] National Palace Museum. 書畫典藏資料檢索系統, 2019.

[14] Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Trans. on Knowl. and Data Eng., 22(10):1345–1359, October 2010.

[15] Tara N. Sainath, Abdel-rahman Mohamed, Brian Kingsbury, and Bhuvana Ramabhadran. Deep convolutional neural networks for LVCSR. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, 2013.

[16] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2015.

[17] Jonathan J. Tompson, Arjun Jain, Yann LeCun, and Christoph Bregler. Joint training of a convolutional network and a graphical model for human pose estimation. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 1799–1807. Curran Associates, Inc., 2014.

[18] H. Y. Xiong, B. Alipanahi, L. J. Lee, H. Bretschneider, D. Merico, R. K. C. Yuen, Y. Hua, S. Gueroussov, H. S. Najafabadi, T. R. Hughes, Q. Morris, Y. Barash, A. R. Krainer, N. Jojic, S. W. Scherer, B. J. Blencowe, and B. J. Frey. The human splicing code reveals new insights into the genetic determinants of disease. Science, 347(6218):1254806–1254806, December 2014.zh_TW
dc.identifier.doi10.6814/NCCU201900448en_US
item.grantfulltextrestricted-
item.openairetypethesis-
item.openairecristypehttp://purl.org/coar/resource_type/c_46ec-
item.cerifentitytypePublications-
item.fulltextWith Fulltext-
Appears in Collections: 學位論文 (Theses)
Files in This Item:
File | Size | Format
100301.pdf | 13.04 MB | Adobe PDF