Please use this identifier to cite or link to this item: https://ah.lib.nccu.edu.tw/handle/140.119/131976
Title: 遞歸及自注意力類神經網路之強健性分析
Analysis of the robustness of recurrent and self-attentive neural networks
Author: 謝育倫
Hsieh, Yu-Lun
Contributors: 許聞廉, 劉昭麟
Hsu, Wen-Lian; Liu, Chao-Lin
謝育倫
Hsieh, Yu-Lun
Keywords: Self-attention mechanism
Adversarial input
Recurrent neural network (RNN)
Long short-term memory (LSTM)
Robustness analysis
Date: 2020
Date uploaded: 2-Sep-2020
Abstract: This dissertation examines the effectiveness of currently widespread deep learning methods, that is, machine learning models built from artificial neural networks, in the field of natural language processing. At the same time, we carry out a series of robustness analyses of these models, chiefly by observing their resistance to adversarial input perturbations. More specifically, the experiments cover the recently prominent Transformer model, a neural network built on the self-attention mechanism, as well as the commonly used recurrent neural networks built from long short-term memory (LSTM) cells, and compare their results and differences when applied to natural language processing. The experiments span many of the most common tasks in the field, such as text classification, word segmentation and part-of-speech tagging, sentiment classification, entailment analysis, document summarization, and machine translation. The results show that the self-attention-based Transformer architecture performs better on the vast majority of tasks. In addition to evaluating the different architectures, we also apply adversarial perturbations to the input data to test how the models differ in reliability. We further propose several novel methods for generating effective adversarial input perturbations. Most importantly, based on these experimental results we offer theoretical analysis and explanations of the possible sources of the robustness differences between neural network architectures.
In this work, we investigate the effectiveness of current deep learning methods, i.e., neural network-based models, in the field of natural language processing. Additionally, we conduct a robustness analysis of various neural model architectures. We evaluate the neural networks' resistance to adversarial input perturbations, which, in essence, means replacing input words so that the model produces incorrect results or predictions. We compare various network architectures, including the Transformer network based on the self-attention mechanism and the commonly employed recurrent neural networks using long short-term memory (LSTM) cells. We conduct extensive experiments covering the most common tasks in natural language processing: sentence classification, word segmentation and part-of-speech tagging, sentiment classification, entailment analysis, abstractive document summarization, and machine translation. In the process, we evaluate their effectiveness against other state-of-the-art approaches. We then estimate the robustness of the different models against adversarial examples using five attack methods. Most importantly, we propose a series of novel methods for generating adversarial input perturbations and devise a theoretical analysis from our observations. Finally, we attempt to interpret the differences in robustness between neural network models.
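The attack setting described in the abstract, replacing input words so that a model changes its prediction, can be illustrated with a small self-contained sketch. The snippet below is only a toy greedy word-substitution loop against a hand-written keyword classifier; the model, the substitution table, and all names are invented for illustration and do not correspond to the thesis's actual models or its five attack methods.

import math

# Hypothetical substitution candidates per word; the thesis uses its own,
# more sophisticated perturbation methods.
SUBSTITUTIONS = {
    "love": ["like", "tolerate"],
    "great": ["okay", "passable"],
    "good": ["decent", "fine"],
}

def toy_model(tokens):
    """Stand-in sentiment classifier: P(positive) from simple word counts."""
    positive = {"love", "great", "good"}
    negative = {"hate", "awful", "bad"}
    score = sum(t in positive for t in tokens) - sum(t in negative for t in tokens)
    return 1.0 / (1.0 + math.exp(-score))

def greedy_word_substitution(tokens, budget=2):
    """Greedily swap words to reduce the model's confidence in its prediction."""
    tokens = list(tokens)
    for _ in range(budget):
        base = toy_model(tokens)
        best = None  # (confidence drop, position, candidate word)
        for i, tok in enumerate(tokens):
            for cand in SUBSTITUTIONS.get(tok, []):
                trial = tokens[:i] + [cand] + tokens[i + 1:]
                drop = base - toy_model(trial)
                if best is None or drop > best[0]:
                    best = (drop, i, cand)
        if best is None or best[0] <= 0:
            break  # no remaining substitution lowers the confidence
        _, i, cand = best
        tokens[i] = cand
    return tokens

original = "i love this great movie".split()
adversarial = greedy_word_substitution(original)
print(toy_model(original), toy_model(adversarial), " ".join(adversarial))

Real attacks of this kind typically score candidate replacements with the target model's gradients or output probabilities and constrain substitutions to semantically similar words, but the greedy search structure is the same.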
Description: Doctoral dissertation
National Chengchi University
International Doctoral Program in Social Networks and Human-Centered Computing (TIGP)
103761503
Source: http://thesis.lib.nccu.edu.tw/record/#G0103761503
Data type: thesis
Appears in Collections: Theses

Files in This Item:
File: 150301.pdf (1.15 MB, Adobe PDF)