Title 對抗式 Text-to-Text Transformer 網路於深度序列特徵學習
Adversarial Text-to-Text Transformer Nets for Deep Sequential Feature Learning
Author Kuok, Io-Wai (郭耀威)
Contributors Hsiao, Shun-Wen (蕭舜文) (advisor)
Kuok, Io-Wai (郭耀威)
Keywords Sequential
Feature learning
Adversarial learning
Language model
non-NLP
Date 2022
Date uploaded 1-Jul-2022 16:08:40 (UTC+8)
Abstract Digital data is impactful and growing rapidly. Understanding such complex, compound information in order to improve operations is critical, but doing so consumes considerable time and resources; we therefore aim to develop a method that achieves this objective using only mainstream, readily available hardware. We propose an efficient deep sequential feature learning approach that applies adversarial techniques and a multitask learning strategy to a state-of-the-art language model. The resulting model adapts to massive sequences of diverse characteristics, extracting meaningful sequential representations while preserving symbolic information for heterogeneous downstream tasks. We conduct experiments on real-world sequential-event data with varied characteristics, provide a comprehensive comparison with state-of-the-art language models such as BERT and T5, and report an ablation study of the proposed architecture. Empirical results show that our approach achieves performance comparable to these models on NLP tasks with a model one third the size of BERT-small, and the best results on some non-NLP tasks.
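To make the approach described in the abstract more concrete, below is a minimal, illustrative sketch, not the thesis's actual implementation, of how a small T5-style encoder could be trained jointly with a reconstruction head, a task classification head, and an adversarial discriminator over pooled sequence features, in the spirit of adversarial autoencoders. It assumes PyTorch and Hugging Face Transformers; the class name SeqFeatureLearner, the model dimensions, and the loss weights are hypothetical.

import torch
import torch.nn as nn
from transformers import T5Config, T5EncoderModel

class SeqFeatureLearner(nn.Module):
    """Hypothetical sketch: a shared small encoder with reconstruction and classification heads."""
    def __init__(self, vocab_size=8000, d_model=256, n_classes=8):
        super().__init__()
        cfg = T5Config(vocab_size=vocab_size, d_model=d_model,
                       num_layers=4, num_heads=4, d_ff=512)
        self.encoder = T5EncoderModel(cfg)                  # small text-to-text style encoder
        self.reconstruct = nn.Linear(d_model, vocab_size)   # token reconstruction head
        self.classify = nn.Linear(d_model, n_classes)       # downstream task head

    def forward(self, input_ids, attention_mask):
        h = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state
        z = h.mean(dim=1)                                   # pooled sequence feature
        return h, z

# The discriminator tries to distinguish encoded features from samples of a prior,
# which regularizes the feature space adversarially (adversarial-autoencoder style).
model = SeqFeatureLearner()
discriminator = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 1))

input_ids = torch.randint(0, 8000, (2, 32))                 # toy batch of token ids
mask = torch.ones_like(input_ids)
labels = torch.randint(0, 8, (2,))

h, z = model(input_ids, mask)
recon_loss = nn.functional.cross_entropy(model.reconstruct(h).transpose(1, 2), input_ids)
cls_loss = nn.functional.cross_entropy(model.classify(z), labels)
adv_loss = nn.functional.binary_cross_entropy_with_logits(
    discriminator(z), torch.ones(z.size(0), 1))             # generator-side adversarial term only
total = recon_loss + cls_loss + 0.1 * adv_loss               # multitask objective; weights are illustrative
total.backward()                                             # a full setup would also update the discriminator in alternation

In such a setup the reconstruction and classification losses keep the pooled feature informative for heterogeneous downstream tasks, while the adversarial term shapes its distribution, which is roughly the multitask, adversarial combination the abstract refers to.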
References [1] Y. Bengio, A. C. Courville, and P. Vincent, “Representation learning: A review and new perspectives,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, pp. 1798–1828, 2013.
[2] K. Pearson, “LIII. On lines and planes of closest fit to systems of points in space,” Philosophical Magazine, vol. 2, pp. 559–572, 1901.
[3] M. J. Greenacre and J. Blasius, “Multiple correspondence analysis and related methods,” 2006.
[4] W.-J. Li, D.-Y. Yeung, and Z. Zhang, “Probabilistic relational pca,” in NIPS, 2009.
[5] M. Germain, K. Gregor, I. Murray, and H. Larochelle, “Made: Masked autoencoder for distribution estimation,” in ICML, 2015.
[6] J. Chung, K. Kastner, L. Dinh, K. Goel, A. C. Courville, and Y. Bengio, “A recurrent latent variable model for sequential data,” in NIPS, 2015.
[7] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” CoRR, vol. abs/1409.1556, 2015.
[8] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, 2016.
[9] T. Mikolov, K. Chen, G. S. Corrado, and J. Dean, “Efficient estimation of word representations in vector space,” in ICLR, 2013.
[10] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean, “Distributed representations of words and phrases and their compositionality,” in NIPS, 2013.
[11] I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with neural networks,” in NIPS, 2014.
[12] M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer, “Deep contextualized word representations,” in NAACL, 2018.
[13] Google, “The wordpiece algorithm in open source bert,” Oct 2018.
[14] T. Kudo and J. Richardson, “Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing,” in EMNLP, 2018.
[15] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by backpropagating errors,” Nature, vol. 323, pp. 533–536, 1986.
[16] A. Vaswani, N. M. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” ArXiv, vol. abs/1706.03762, 2017.
[17] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre-training of deep bidirectional transformers for language understanding,” ArXiv, vol. abs/1810.04805, 2019.
[18] A. Radford and K. Narasimhan, “Improving language understanding by generative pre-training,” 2018.
[19] C. Raffel, N. M. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu, “Exploring the limits of transfer learning with a unified text-to-text transformer,” ArXiv, vol. abs/1910.10683, 2020.
[20] K. Clark, U. Khandelwal, O. Levy, and C. D. Manning, “What does bert look at? an analysis of bert’s attention,” in BlackboxNLP@ACL, 2019.
[21] N. Kitaev, L. Kaiser, and A. Levskaya, “Reformer: The efficient transformer,” ArXiv, vol. abs/2001.04451, 2020.
[22] I. Beltagy, M. E. Peters, and A. Cohan, “Longformer: The long-document transformer,” ArXiv, vol. abs/2004.05150, 2020.
[23] Z. Dai, Z. Yang, Y. Yang, J. G. Carbonell, Q. V. Le, and R. Salakhutdinov, “Transformer-xl: Attentive language models beyond a fixed-length context,” ArXiv, vol. abs/1901.02860, 2019.
[24] Z. Lan, M. Chen, S. Goodman, K. Gimpel, P. Sharma, and R. Soricut, “Albert: A lite bert for self-supervised learning of language representations,” ArXiv, vol. abs/1909.11942, 2020.
[25] M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, and L. Zettlemoyer, “Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension,” in ACL, 2020.
[26] S. Kobayashi, “Homemade bookcorpus.” https://github.com/BIGBALLON/cifar-10-cnn, 2018.
[27] W. Foundation, “Wikimedia downloads.” https://dumps.wikimedia.org, 2020.
[28] K. Song, X. Tan, T. Qin, J. Lu, and T.-Y. Liu, “Mass: Masked sequence to sequence pre-training for language generation,” in ICML, 2019.
[29] A. Andoni, P. Indyk, T. Laarhoven, I. P. Razenshteyn, and L. Schmidt, “Practical and optimal lsh for angular distance,” ArXiv, vol. abs/1509.02897, 2015.
[30] A. N. Gomez, M. Ren, R. Urtasun, and R. B. Grosse, “The reversible residual network: Backpropagation without storing activations,” in NIPS, 2017.
[31] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial networks,” Communications of the ACM, 2020.
[32] M. Arjovsky, S. Chintala, and L. Bottou, “Wasserstein gan,” ArXiv, vol. abs/1701.07875, 2017.
[33] D. P. Kingma and M. Welling, “Auto-encoding variational bayes,” CoRR, vol. abs/1312.6114, 2014.
[34] A. Makhzani, J. Shlens, N. Jaitly, and I. J. Goodfellow, “Adversarial autoencoders,” ArXiv, vol. abs/1511.05644, 2015.
[35] M. A. Kramer, “Nonlinear principal component analysis using autoassociative neural networks,” Aiche Journal, vol. 37, pp. 233–243, 1991.
[36] A. B. L. Larsen, S. K. Sønderby, H. Larochelle, and O. Winther, “Autoencoding beyond pixels using a learned similarity metric,” ArXiv, vol. abs/1512.09300, 2016.
[37] A. Plumerault, H. L. Borgne, and C. Hudelot, “Avae: Adversarial variational auto encoder,” 2020 25th International Conference on Pattern Recognition (ICPR), pp. 8687–8694, 2021.
[38] A. Pagnoni, K. Liu, and S. Li, “Conditional variational autoencoder for neural machine translation,” ArXiv, vol. abs/1812.04405, 2018.
[39] M. Mirza and S. Osindero, “Conditional generative adversarial nets,” ArXiv, vol. abs/1411.1784, 2014.
[40] P. Bhargava, A. Drozd, and A. Rogers, “Generalization in nli: Ways (not) to go beyond simple heuristics,” 2021.
[41] I. Turc, M. Chang, K. Lee, and K. Toutanova, “Well-read students learn better: The impact of student initialization on knowledge distillation,” CoRR, vol. abs/1908.08962, 2019.
[42] B. Klimt and Y. Yang, “The enron corpus: A new dataset for email classification research,” in ECML, 2004.
[43] mrm8488, “Mrm8488/fake-news · datasets at hugging face,” Oct 2021.
[44] F. O. Catak and A. F. Yazi, “A benchmark api call dataset for windows pe malware classification,” ArXiv, vol. abs/1905.01999, 2019.
[45] D. Dua and C. Graff, “UCI machine learning repository,” 2017.
[46] F. O. Catak, J. Ahmed, K. Sahinbas, and Z. H. Khand, “Data augmentation based malware detection using convolutional neural networks,” PeerJ Computer Science, vol. 7, p. e346, Jan. 2021.
[47] A. F. Yazi, F. O. Catak, and E. Gül, “Classification of metamorphic malware with deep learning (LSTM),” 2019 27th Signal Processing and Communications Applications Conference (SIU), pp. 1–4, 2019.
[48] P. Gage, “A new algorithm for data compression,” The C Users Journal, vol. 12, pp. 23–38, 1994.
[49] S. Reese, G. Boleda, M. Cuadros, L. Padró, and G. Rigau, “Wikicorpus: A word-sense disambiguated multilingual Wikipedia corpus,” in Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC’10), (Valletta, Malta), European Language Resources Association (ELRA), May 2010.
[50] Y. Zhu, R. Kiros, R. Zemel, R. Salakhutdinov, R. Urtasun, A. Torralba, and S. Fidler, “Aligning books and movies: Towards story-like visual explanations by watching movies and reading books,” in The IEEE International Conference on Computer Vision (ICCV), December 2015.
[51] Q. Lhoest, A. Villanova del Moral, Y. Jernite, A. Thakur, P. von Platen, S. Patil, J. Chaumond, M. Drame, J. Plu, L. Tunstall, J. Davison, M. Šaško, G. Chhablani, B. Malik, S. Brandeis, T. Le Scao, V. Sanh, C. Xu, N. Patry, A. McMillan-Major, P. Schmid, S. Gugger, C. Delangue, T. Matussière, L. Debut, S. Bekman, P. Cistac, T. Goehringer, V. Mustar, F. Lagunas, A. Rush, and T. Wolf, “Datasets: A community library for natural language processing,” in Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, (Online and Punta Cana, Dominican Republic), pp. 175–184, Association for Computational Linguistics, Nov. 2021.
[52] T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, J. Davison, S. Shleifer, P. von Platen, C. Ma, Y. Jernite, J. Plu, C. Xu, T. L. Scao, S. Gugger, M. Drame, Q. Lhoest, and A. M. Rush, “Transformers: State-of-the-art natural language processing,” in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, (Online), pp. 38–45, Association for Computational Linguistics, Oct. 2020.
Description Master's thesis
National Chengchi University (國立政治大學)
Department of Management Information Systems (資訊管理學系)
108356041
Source http://thesis.lib.nccu.edu.tw/record/#G0108356041
Type thesis
Identifier G0108356041
URI http://nccur.lib.nccu.edu.tw/handle/140.119/140595
Table of contents 1 Introduction 1
1.1 Motivation 1
1.2 Problem Formulation 1
1.3 Outline 2
2 Related Work 3
2.1 Feature Learning 3
2.2 Language Model 3
2.3 Generative Adversarial Networks 4
3 Proposed Model 5
3.1 Overview 5
3.2 Tokenization 6
3.3 Language Model 7
3.4 Reconstructing 9
3.5 Adversarial Training 10
3.6 Classification 11
3.7 Training Scheme 12
4 Experiments 14
4.1 Evaluation Method 14
4.2 Datasets 18
4.2.1 Enronemail 18
4.2.2 Mal-API-2019 18
4.2.3 mrm8488’s fake news 19
4.2.4 Gene 20
4.3 Tokeniser 20
4.4 Evaluation Metrics 22
4.5 Experimental Environment 22
4.6 Training Parameter 22
5 Results 23
5.1 Enronemail 23
5.2 fake-news 24
5.3 Gene 26
5.4 Mal-API-2019 27
5.5 Overall 29
6 Discussion 32
6.1 Conclusion 32
6.2 Future Work 32
Reference 33
A Inference Process i
B Training Loss ii
B.1 Enronemail ii
B.2 Mal-API-2019 iii
B.3 mrm8488’s fake news xi
B.4 Gene xii
C Confusion Matrix xv
C.1 Enronemail xv
C.2 Mal-API-2019 xvi
C.3 mrm8488’s fake news xxiv
C.4 Gene xxv
D ROC Curve xxviii
D.1 Enronemail xxviii
D.2 Mal-API-2019 xxix
D.3 mrm8488’s fake news xxxvii
D.4 Gene xxxviii
E DET Curve xli
E.1 Enronemail xli
E.2 Mal-API-2019 xlii
E.3 mrm8488’s fake news l
E.4 Gene li
Format 9051579 bytes (application/pdf)
DOI 10.6814/NCCU202200631