Please use this identifier to cite or link to this item: https://ah.lib.nccu.edu.tw/handle/140.119/132064
Title: Cross-lingual Transfer Learning for Toxic Comment Detection
Author: Chen, Kuan-Yu
Contributors: 蔡炎龍 (advisor); Chen, Kuan-Yu
關鍵詞: Transformer
XLM-R
跨語言預測
惡意留言
不平衡數據
深度學習
對話安全
Transformer
XLM-R
cross-lingual prediction
toxic comment
imbalanced data
deep learning
security conversations
Date: 2020
Upload Date: 5-Oct-2020
Abstract: The Transformer model opened a door for the field of natural language processing and pushed it a significant step forward by letting models better capture the relationships between words. Its architecture has also been extended into many language models, such as the cross-lingual models XLM and XLM-R, which have achieved strong results across a wide range of tasks. In this thesis, we show that data from a high-resource language can compensate for the limited data available in a low-resource language, taking the prediction of whether a comment is toxic as our example. We use the English data released by the Jigsaw Multilingual Toxic Comment Classification competition and comments from the PTT Hate board as training sets, and have the model predict toxic comments in Chinese; the English data is much larger than the Chinese data. We compare three training configurations: fine-tuning the model on English data only, fine-tuning it on Chinese data only, and fine-tuning it on the two datasets combined. The English-only configuration gives the best single-language result, reaching 75.9% accuracy thanks to its larger data volume, while in overall accuracy the mixed-data configuration scores higher, at 88.3%. In general, cross-lingual models can make up for the shortage of data in low-resource languages, giving us another way to tackle the low-resource problem.
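For readers who want a concrete picture of the setup the abstract describes, here is a minimal sketch, in Python with the Hugging Face transformers library, of the mixed-data configuration: fine-tuning XLM-R as a binary toxic-comment classifier on pooled English (Jigsaw) and Chinese (PTT) comments. The file names, column names, and hyperparameters are illustrative assumptions, not the thesis's actual code.

import pandas as pd
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

class CommentDataset(Dataset):
    """Tokenizes (text, label) pairs once and serves them as tensors."""
    def __init__(self, texts, labels, tokenizer, max_len=128):
        self.enc = tokenizer(list(texts), truncation=True,
                             padding="max_length", max_length=max_len)
        self.labels = list(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i], dtype=torch.long)
        return item

# Hypothetical CSVs, each with a "comment_text" and a binary "toxic" column.
en = pd.read_csv("jigsaw_en_train.csv")    # large English training set
zh = pd.read_csv("ptt_hate_zh_train.csv")  # much smaller Chinese set
mixed = pd.concat([en, zh], ignore_index=True)

# XLM-R shares one subword vocabulary across its 100 pretraining languages,
# so the same model consumes English and Chinese comments with no translation.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2)

train_ds = CommentDataset(mixed["comment_text"], mixed["toxic"], tokenizer)
args = TrainingArguments(output_dir="xlmr-toxic", num_train_epochs=2,
                         per_device_train_batch_size=16, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=train_ds).train()

The English-only and Chinese-only configurations compared in the abstract would differ only in which dataframe is wrapped by CommentDataset; in all three configurations the model is evaluated on held-out Chinese comments, since the question is how much the high-resource English data helps the low-resource Chinese task.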
References:
[1] Mateusz Buda, Atsuto Maki, and Maciej A. Mazurowski. A systematic study of the class imbalance problem in convolutional neural networks. Neural Networks, 106:249–259, 2018.
[2] K. R. Chowdhary. Natural language processing. In Fundamentals of Artificial Intelligence, pages 603–649. Springer, 2020.
[3] David A. Cieslak, Nitesh V. Chawla, and Aaron Striegel. Combating imbalance in network intrusion datasets. In GrC, pages 732–737, 2006.
[4] Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. Unsupervised cross-lingual representation learning at scale, 2019.
[5] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding, 2018.
[6] Chris Drummond, Robert C. Holte, et al. C4.5, class imbalance, and cost sensitivity: Why under-sampling beats over-sampling. In Workshop on Learning from Imbalanced Datasets II, volume 11, pages 1–8. Citeseer, 2003.
[7] Guo Haixiang, Li Yijing, Jennifer Shang, Gu Mingyun, Huang Yuanyue, and Gong Bing. Learning from class-imbalanced data: Review of methods and applications. Expert Systems with Applications, 73:220–239, 2017.
[8] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, 2016.
[9] Hossein Hosseini, Sreeram Kannan, Baosen Zhang, and Radha Poovendran. Deceiving Google's Perspective API built for detecting toxic comments, 2017.
[10] Anil K. Jain, Jianchang Mao, and K. M. Mohiuddin. Artificial neural networks: A tutorial. Computer, 29(3):31–44, 1996.
[11] Miroslav Kubat, Robert C. Holte, and Stan Matwin. Machine learning for the detection of oil spills in satellite radar images. Machine Learning, 30(2–3):195–215, 1998.
[12] Guillaume Lample and Alexis Conneau. Cross-lingual language model pretraining. CoRR, abs/1901.07291, 2019.
[13] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
[14] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach, 2019.
[15] R. Bharat Rao, Sriram Krishnan, and Radu Stefan Niculescu. Data mining for improved cardiac care. ACM SIGKDD Explorations Newsletter, 8(1):3–10, 2006.
[16] Daniel Svozil, Vladimir Kvasnicka, and Jiri Pospichal. Introduction to multi-layer feed-forward neural networks. Chemometrics and Intelligent Laboratory Systems, 39(1):43–62, 1997.
[17] Wilson L. Taylor. "Cloze procedure": A new tool for measuring readability. Journalism Quarterly, 30(4):415–433, 1953.
[18] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc., 2017.
[19] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google's neural machine translation system: Bridging the gap between human and machine translation, 2016.
[20] Show-Jane Yen and Yue-Shi Lee. Under-sampling approaches for improving prediction of the minority class in an imbalanced dataset. In Intelligent Control and Automation, pages 731–740. Springer, 2006.
[21] Dong Yu and Li Deng. Automatic Speech Recognition: A Deep Learning Approach. Springer Publishing Company, Incorporated, 2014.
Description: Master's thesis
National Chengchi University
Department of Applied Mathematics
107751010
Source: http://thesis.lib.nccu.edu.tw/record/#G0107751010
Data Type: thesis
Appears in Collections: Theses

Files in This Item:
101001.pdf (1.35 MB, Adobe PDF)