Please use this identifier to cite or link to this item: https://ah.lib.nccu.edu.tw/handle/140.119/140595
Title: Adversarial Text-to-Text Transformer Nets for Deep Sequential Feature Learning
Author: Kuok, Io-Wai (郭耀威)
Contributors: Hsiao, Shun-Wen (蕭舜文); Kuok, Io-Wai (郭耀威)
Keywords: Sequential
Feature learning
Adversarial learning
Language model
Non-NLP
Date: 2022
Upload date: 1-Jul-2022
Abstract: Digital data is impactful and growing rapidly. Understanding such complex, compound information to improve operations is critical, but doing so consumes significant time and resources; we therefore aim to develop a method that accomplishes this objective using only mainstream, readily available hardware. We propose an efficient deep sequential feature learning approach that applies adversarial techniques and a multitask learning strategy to a state-of-the-art language model. The resulting model adapts to massive data of varied characteristics, extracting meaningful sequential representations while preserving symbolic information for heterogeneous downstream tasks of any kind. We conduct experiments on real-world sequential event data with differing characteristics, provide a comprehensive comparison against state-of-the-art language models such as BERT and T5, and perform ablation studies on components of the proposed method. Empirical results show that our approach achieves performance comparable to BERT on NLP tasks with a model one-third the size of BERT-small, and achieves the best results on several non-NLP tasks.
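The thesis itself is not reproduced in this record, but the general recipe it builds on, casting non-NLP event sequences as text for a text-to-text model, can be sketched as follows. This is a purely illustrative sketch: the function name, the task prefix, and the API-call trace are hypothetical examples, not taken from the thesis.

```python
def serialize_events(events, task_prefix="classify:"):
    """Render a non-NLP event sequence (e.g. API-call names) as a single
    text string in T5 style: a task prefix followed by space-separated
    event tokens, so a text-to-text model can consume it directly."""
    tokens = [e.strip().replace(" ", "_") for e in events]
    return f"{task_prefix} " + " ".join(tokens)

# Hypothetical Windows API-call trace serving as a "sentence".
trace = ["LoadLibraryA", "GetProcAddress", "VirtualAlloc", "WriteProcessMemory"]
print(serialize_events(trace))
# prints: classify: LoadLibraryA GetProcAddress VirtualAlloc WriteProcessMemory
```

The key design point is that the same serialized form feeds both NLP and non-NLP sequences through one unified model interface, with downstream tasks distinguished only by the prefix.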
Description: Master's thesis
National Chengchi University
Department of Management Information Systems
108356041
Source: http://thesis.lib.nccu.edu.tw/record/#G0108356041
Data type: thesis
Appears in Collections: Theses

Files in This Item:
File: 604101.pdf · Size: 8.84 MB · Format: Adobe PDF



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.