Academic Output: Degree Thesis (Doctoral Dissertation)
Title (Chinese): 遞歸及自注意力類神經網路之強健性分析
Title (English): Analysis of the robustness of recurrent and self-attentive neural networks
Author: 謝育倫 (Hsieh, Yu-Lun)
Advisors: 許聞廉 (Hsu, Wen-Lian); 劉昭麟 (Liu, Chao-Lin)
Keywords (Chinese): 自我注意力機制; 對抗性輸入; 遞歸類神經網路; 長短期記憶; 強健性分析
Keywords (English): Robustness; Self attention; Adversarial input; RNN; LSTM
Date: 2020
Deposited: 2 September 2020, 13:22:20 (UTC+8)

Abstract (Chinese)
本文主要在驗證目前被廣泛應用的深度學習方法,即利用類神經網路所建構的機器學習模型,在自然語言處理領域中之成效。同時,我們對各式模型進行了一系列的強健性分析,其中主要包含了觀察這些模型對於對抗性(adversarial)輸入擾動之抵抗力。更進一步來說,本文所進行的實驗對象,包含了近期受到許多注目的 Transformer 模型,也就是建構在自我注意力機制之上的一種類神經網路,以及目前常用的,基於長短期記憶 (LSTM)細胞所搭建的遞歸類神經網路等等不同網路架構,觀察其應用於自然語言處理上的結果與差異。在實驗內容上,我們囊括了許多在自然語言處理領域中最常見的工作,例如:文本分類、斷詞及詞類標註、情緒分類、蘊含分析、文件摘要及機器翻譯等。結果發現,基於自我注意力的 Transformer 架構在絕大多數的工作上都有較為優異的表現。除了使用不同網路架構並對其成效進行評估,我們也對輸入之資料加以對抗性擾動,以測試不同模型在可靠度上的差異。另外,我們同時提出一些創新的方法來產生有效的對抗性輸入擾動。更重要的是,我們基於前述實驗結果提出理論上的分析與解釋,以探討不同類神經網路架構之間強健性差異的可能來源。
Abstract (English)
In this work, we focus on investigating the effectiveness of current deep learning methods, also known as neural network-based models, in the field of natural language processing. Additionally, we conduct a robustness analysis of various neural model architectures, evaluating each network's resistance to adversarial input perturbations, which in essence means replacing input words so that the model produces incorrect results or predictions. We compare various network architectures, including the Transformer network based on the self-attention mechanism and the commonly employed recurrent neural networks built from long short-term memory (LSTM) cells. We conduct extensive experiments covering the most common tasks in natural language processing: sentence classification, word segmentation and part-of-speech tagging, sentiment classification, entailment analysis, abstractive document summarization, and machine translation. In the process, we evaluate their effectiveness compared with other state-of-the-art approaches. We then estimate the robustness of the different models against adversarial examples generated through five attack methods. Most importantly, we propose a series of innovative methods for generating adversarial input perturbations and derive a theoretical analysis from our observations. Finally, we attempt to interpret the differences in robustness between neural network models.
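The abstract characterizes an adversarial input perturbation as, in essence, replacing input words so that the model produces an incorrect prediction. As a rough, self-contained illustration of that idea only, the sketch below runs a greedy word-substitution loop against a toy bag-of-words sentiment scorer. The scorer, synonym table, and function names are hypothetical stand-ins; they are not the trained LSTM or Transformer models, nor the attack methods studied in the thesis (Random Attack, List-based Attack, Greedy Select & Greedy Replace, Greedy Select with Embedding Constraint, and Attention-based Select, per the table of contents).

```python
# Illustrative sketch only: greedy word-substitution perturbation against a
# toy bag-of-words sentiment scorer. All names here are hypothetical and do
# not correspond to the models or attack implementations in the thesis.

POSITIVE = {"good", "great", "love", "excellent"}
NEGATIVE = {"bad", "poor", "terrible", "subpar"}

# Candidate replacements that roughly preserve meaning for a human reader.
SYNONYMS = {
    "good": ["decent", "fine"],
    "great": ["passable", "okay"],
    "bad": ["lousy", "subpar"],
}

def sentiment_score(words):
    """Toy classifier score: +1 per positive word, -1 per negative word."""
    return sum((w in POSITIVE) - (w in NEGATIVE) for w in words)

def predict(words):
    return "positive" if sentiment_score(words) > 0 else "negative"

def greedy_word_substitution(words, budget=3):
    """Apply up to `budget` synonym swaps, each time keeping the swap that
    pushes the score furthest toward the opposite label, and stop as soon as
    the toy classifier's prediction flips."""
    original = predict(words)
    target_sign = -1 if original == "positive" else 1  # attack direction
    perturbed = list(words)
    for _ in range(budget):
        best = None  # (signed score after swap, position, candidate word)
        for i, w in enumerate(perturbed):
            for cand in SYNONYMS.get(w, []):
                trial = perturbed[:i] + [cand] + perturbed[i + 1:]
                gain = target_sign * sentiment_score(trial)
                if best is None or gain > best[0]:
                    best = (gain, i, cand)
        if best is None:
            break  # nothing left to substitute
        perturbed[best[1]] = best[2]
        if predict(perturbed) != original:
            return perturbed, True
    return perturbed, False

if __name__ == "__main__":
    sentence = "the movie was good and the acting was great".split()
    adversarial, flipped = greedy_word_substitution(sentence)
    print("original :", " ".join(sentence), "->", predict(sentence))
    print("perturbed:", " ".join(adversarial), "->", predict(adversarial))
    print("prediction flipped:", flipped)
```

Running the sketch flips the toy prediction with two single-word swaps while keeping the sentence readable, which is the same qualitative effect the thesis measures on much larger neural models.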
Degree: 博士 (Doctor of Philosophy)
Institution: 國立政治大學 (National Chengchi University)
Program: 社群網路與人智計算國際研究生博士學位學程 (TIGP International Doctoral Program in Social Networks and Human-Centered Computing)
Student ID: 103761503
Identifier: G0103761503
Source: http://thesis.lib.nccu.edu.tw/record/#G0103761503
URI: http://nccur.lib.nccu.edu.tw/handle/140.119/131976
Type: thesis
Format: application/pdf, 1,173,678 bytes

Table of Contents
    致謝 (Acknowledgments)
    中文摘要 (Chinese Abstract)
    Abstract
    Contents
    List of Tables
    List of Figures
    1 Introduction
        1.1 Motivation
        1.2 Research Objectives
        1.3 Outline
        1.4 Publications
    2 Background and Related Work
        2.1 Natural Language Processing
        2.2 Neural Networks
            2.2.1 Activation Functions
            2.2.2 Recurrent Neural Networks
            2.2.3 Long Short-term Memory
            2.2.4 Training
        2.3 Attention Mechanisms
            2.3.1 Self-Attention
        2.4 Adversarial Attack
            2.4.1 Pre-training and Multi-task Learning
        2.5 Evaluation Metrics
    3 Methods
        3.1 Neural Networks
            3.1.1 Recurrent Neural Networks
            3.1.2 Self-Attentive Models
        3.2 Adversarial Attack Methods
            3.2.1 Random Attack
            3.2.2 List-based Attack
            3.2.3 Greedy Select & Greedy Replace
            3.2.4 Greedy Select with Embedding Constraint
            3.2.5 Attention-based Select
    4 Experiments
        4.1 Text Sequence Classification in Biomedical Literature
            4.1.1 Experimental Setup
            4.1.2 Results
        4.2 Sequence Labeling
            4.2.1 Experimental Setup
            4.2.2 Results and Discussion
        4.3 Sentiment Analysis
            4.3.1 Results
            4.3.2 Quality of Adversarial Examples
        4.4 Textual Entailment
            4.4.1 Results
            4.4.2 Quality of Adversarial Examples
        4.5 Abstractive Summarization
            4.5.1 Experimental Setup
            4.5.2 Results
        4.6 Machine Translation
            4.6.1 Results
        4.7 Summary
    5 Discussions
        5.1 Theoretical Analysis
            5.1.1 Sensitivity of Self-attention Layer
            5.1.2 Illustration of the proposed theory
    6 Conclusions
        6.1 Theoretical Implications
        6.2 Unsolved Problems
    Bibliography
DOI: 10.6814/NCCU202001426