Please use this identifier to cite or link to this item: https://ah.lib.nccu.edu.tw/handle/140.119/141182
DC Field: Value [Language]
dc.contributor.advisor: 蔡炎龍 [zh_TW]
dc.contributor.advisor: Tsai, Yen-Lung [en_US]
dc.contributor.author: 林奕勳 [zh_TW]
dc.contributor.author: Lin, Yi-Hsun [en_US]
dc.creator: 林奕勳 [zh_TW]
dc.creator: Lin, Yi-Hsun [en_US]
dc.date: 2022 [en_US]
dc.date.accessioned: 2022-08-01T10:13:06Z
dc.date.available: 2022-08-01T10:13:06Z
dc.date.issued: 2022-08-01T10:13:06Z
dc.identifier: G0109751004 [en_US]
dc.identifier.uri: http://nccur.lib.nccu.edu.tw/handle/140.119/141182
dc.description: 碩士 (Master's) [zh_TW]
dc.description: 國立政治大學 (National Chengchi University) [zh_TW]
dc.description: 應用數學系 (Department of Applied Mathematics) [zh_TW]
dc.description: 109751004 [zh_TW]
dc.description.abstract: 自從Transformer 發表後,無疑為自然語言處理領域立下新的里程碑,許多的模型也因應而起,分別在各自然語言處理項目有傑出的表現。如此強大的模型多數背後依靠巨量的參數運算,但各模型皆以英文為發展主軸,我們很難訓練一個一樣強的中文模型。在缺乏原生中文模型的情況下,我們利用現有的資源及模型訓練機器做中文文章摘要:使用BERT 及GPT-2,搭配中研院中文詞知識庫小組的中文模型,並採用新聞資料進行訓練。先透過BERT 從原文章獲得抽取式摘要,使文章篇幅縮短並保留住重要資訊,接著使用GPT-2 從抽取過的摘要中再進行生成式摘要,去除掉重複的資訊並使語句更平順。在我們的實驗中,我們獲得了不錯的中文文章摘要,證明這個方法是有效的。 [zh_TW]
dc.description.abstract: The publication of the Transformer undoubtedly set a new milestone in Natural Language Processing: many models built on it have since been released and perform outstandingly on a wide range of NLP tasks. Most of these powerful models rely on an enormous number of parameters, and most are developed primarily for English, so it is difficult to train an equally strong Chinese model from scratch. In the absence of a native Chinese model of that scale, we use existing resources to build a Chinese article summarization system: BERT and GPT-2, together with the Chinese pretrained models released by the Chinese Knowledge and Information Processing (CKIP) group at Academia Sinica, Taiwan, trained on news data. First, BERT produces an extractive summary of the original article, shortening it while retaining the important information; then GPT-2 generates an abstractive summary from the extracted sentences, removing duplicated information and making the sentences smoother. In our experiments we obtained decent Chinese article summaries, showing that this method is effective. [en_US]
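To make the two-stage pipeline described in the abstract concrete, the following Python sketch shows how such a system could be wired together with the Hugging Face transformers library. It is only an illustration under assumptions: the model identifiers ckiplab/bert-base-chinese and ckiplab/gpt2-base-chinese are the publicly released CKIP checkpoints assumed here, the cosine-similarity sentence scorer is a stand-in for the trained BERTSUM classifier used in the thesis, and GPT-2 is used without the fine-tuning on news data that the thesis performs.

# Minimal sketch of the pipeline in the abstract: (1) BERT-based extractive selection,
# (2) GPT-2 abstractive rewriting. The CKIP model names and the similarity-based
# sentence scorer are illustrative assumptions; the thesis trains a BERTSUM classifier
# and fine-tunes GPT-2 on news data, neither of which is reproduced here.
import re
import torch
from transformers import BertTokenizerFast, BertModel, GPT2LMHeadModel

# The CKIP Chinese models share BERT's Chinese vocabulary, so one tokenizer serves both.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
bert = BertModel.from_pretrained("ckiplab/bert-base-chinese").eval()
gpt2 = GPT2LMHeadModel.from_pretrained("ckiplab/gpt2-base-chinese").eval()

def split_sentences(article: str):
    """Naive splitter on full-width sentence terminators."""
    parts = re.split(r"(?<=[。!?])", article)
    return [s.strip() for s in parts if s.strip()]

@torch.no_grad()
def extract_summary(article: str, k: int = 3) -> str:
    """Stand-in for BERTSUM: embed each sentence with BERT's [CLS] vector, rank
    sentences by cosine similarity to the mean document embedding, and keep the
    top-k sentences in their original order."""
    sents = split_sentences(article)
    enc = tokenizer(sents, padding=True, truncation=True, max_length=128,
                    return_tensors="pt")
    cls = bert(**enc).last_hidden_state[:, 0]       # one [CLS] vector per sentence
    doc = cls.mean(dim=0, keepdim=True)             # crude document representation
    scores = torch.nn.functional.cosine_similarity(cls, doc)
    keep = sorted(scores.topk(min(k, len(sents))).indices.tolist())
    return "".join(sents[i] for i in keep)

@torch.no_grad()
def abstractive_summary(extracted: str, max_new_tokens: int = 80) -> str:
    """Condition GPT-2 on the extracted sentences and sample a shorter, smoother
    rewrite. A model fine-tuned on (extracted summary -> reference summary) pairs,
    as in the thesis, would be loaded in place of the base checkpoint above."""
    ids = tokenizer(extracted, return_tensors="pt").input_ids
    out = gpt2.generate(ids, max_new_tokens=max_new_tokens, do_sample=True,
                        top_p=0.9, no_repeat_ngram_size=3,
                        pad_token_id=tokenizer.pad_token_id)
    text = tokenizer.decode(out[0, ids.shape[1]:], skip_special_tokens=True)
    return text.replace(" ", "")  # the BERT tokenizer inserts spaces when decoding

# Usage (assuming `article` holds a Chinese news article):
#   extracted = extract_summary(article, k=3)
#   summary = abstractive_summary(extracted)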
dc.description.tableofcontents [zh_TW]:
1 Introduction 1
2 Deep Learning 2
2.1 Neurons and Neural Networks 4
2.2 Activation Function 6
2.3 Loss Function 8
2.4 Gradient Descent Method 10
3 Word Embeddings 12
3.1 Word2Vec 12
3.2 GloVe 13
3.3 FastText 14
4 Transformer 16
4.1 Embeddings 16
4.2 Encoder 18
4.3 Decoder 22
5 Contextualized Word Embeddings 24
5.1 ELMo 24
5.2 BERT 25
5.3 GPT-2 27
6 Summarization 28
6.1 Two methods of summarization 28
6.2 TextRank 29
6.3 BERTSUM 30
7 Experiments 33
7.1 Data Preparation 34
7.2 Extractive Summarization with BERTSUM 34
7.3 Abstractive Summarization with GPT-2 35
7.4 Result 36
8 Conclusion 39
Bibliography 40
dc.format.extent: 4294382 bytes
dc.format.mimetype: application/pdf
dc.source.uri: http://thesis.lib.nccu.edu.tw/record/#G0109751004 [en_US]
dc.subject: Transformer [zh_TW]
dc.subject: BERT [zh_TW]
dc.subject: GPT-2 [zh_TW]
dc.subject: 中文文章摘要 [zh_TW]
dc.subject: 抽取式摘要 [zh_TW]
dc.subject: 生成式摘要 [zh_TW]
dc.subject: 深度學習 [zh_TW]
dc.subject: Transformer [en_US]
dc.subject: BERT [en_US]
dc.subject: GPT-2 [en_US]
dc.subject: Chinese article summarization [en_US]
dc.subject: Extractive summarization [en_US]
dc.subject: Abstractive summarization [en_US]
dc.subject: Deep learning [en_US]
dc.title: Transformer 應用於中文文章摘要 [zh_TW]
dc.title: Using Transformer for Chinese article summarization [en_US]
dc.type: thesis [en_US]
dc.relation.reference [zh_TW]:
[1] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
[2] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
[3] Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–1828, 2013.
[4] Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. Enriching word vectors with subword information, 2016.
[5] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020.
[6] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
[7] Kunihiko Fukushima. Neural network model for a mechanism of pattern recognition unaffected by shift in position: Neocognitron. IEICE Technical Report, A, 62(10):658–665, 1979.
[8] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[9] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[10] Anil K. Jain, Jianchang Mao, and K. Moidin Mohiuddin. Artificial neural networks: A tutorial. Computer, 29(3):31–44, 1996.
[11] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
[12] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[13] Moshe Leshno, Vladimir Ya Lin, Allan Pinkus, and Shimon Schocken. Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. Neural Networks, 6(6):861–867, 1993.
[14] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461, 2019.
[15] Yang Liu and Mirella Lapata. Text summarization with pretrained encoders. arXiv preprint arXiv:1908.08345, 2019.
[16] Rada Mihalcea and Paul Tarau. TextRank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 404–411, 2004.
[17] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
[18] Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted Boltzmann machines. In ICML, 2010.
[19] Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. The PageRank citation ranking: Bringing order to the web. Technical report, Stanford InfoLab, 1999.
[20] Jeffrey Pennington, Richard Socher, and Christopher D. Manning. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, 2014.
[21] Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations, 2018.
[22] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. 2018.
[23] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9, 2019.
[24] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu, et al. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67, 2020.
[25] David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning representations by back-propagating errors. Nature, 323(6088):533–536, 1986.
[26] Jürgen Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85–117, January 2015.
[27] David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
[28] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
[29] Ronald J. Williams and David Zipser. A learning algorithm for continually running fully recurrent neural networks. Neural Computation, 1(2):270–280, 1989.
[30] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
dc.identifier.doi: 10.6814/NCCU202200797 [en_US]
item.cerifentitytype: Publications
item.fulltext: With Fulltext
item.grantfulltext: restricted
item.openairetype: thesis
item.openairecristype: http://purl.org/coar/resource_type/c_46ec
Appears in Collections: 學位論文 (Theses)
Files in This Item:
File: 100401.pdf    Size: 4.19 MB    Format: Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.