Please use this identifier to cite or link to this item: https://ah.lib.nccu.edu.tw/handle/140.119/139546
Title: Automatic ATT&CK Tactics Identification by Transformer-Based Language Model
Author: Lin, Ling-Hsuan
Contributors: Hsiao, Shun-Wen; Lin, Ling-Hsuan
Keywords: Packet analysis
Language model
Multi-label classification
Cyber threat intelligence
Cybersecurity
MITRE ATT&CK
Transformers
Date: 2021
Date Uploaded: 1-Apr-2022
Abstract: Cybersecurity has become a primary global concern amid the rapid increase in security attacks and data breaches. Artificial intelligence can help humans analyze attacks automatically, in particular by analyzing attack intent to generate threat intelligence. This study therefore aims to identify the intent of attack packets automatically with an artificial intelligence model. We propose a Transformer-based language model that learns the relationship between tactics (intents) and attack packets by analyzing articles on the MITRE website. The model embeds a packet and outputs a high-dimensional vector representing the packet's content and its intent (if any). This study also establishes a labeled-dataset generation process that uses an unsupervised learning method to produce labeled data for training the language model, effectively reducing the burden of manually labeling large datasets. The experimental results show that the multi-label classification language model fine-tuned in this study achieves an F1 score of 1 in identifying the attack tactics of packets.
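To make the approach concrete, below is a minimal sketch (not the thesis code) of fine-tuning a Transformer for multi-label tactic classification with the HuggingFace Transformers library, which the thesis cites among its tools. The tactic subset, the hex-byte rendering of a packet payload, and the bert-base-uncased checkpoint are illustrative assumptions, not details taken from the thesis.

import torch
from transformers import BertForSequenceClassification, BertTokenizerFast

# Illustrative subset of ATT&CK tactics; the real label set comes from MITRE.
TACTICS = ["reconnaissance", "initial-access", "execution",
           "persistence", "discovery", "exfiltration"]

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(TACTICS),
    problem_type="multi_label_classification",  # sigmoid outputs + BCE loss
)

# One hypothetical packet payload rendered as hex byte tokens, with two labels.
payload = "47 45 54 20 2f 61 64 6d 69 6e 20 48 54 54 50 2f 31 2e 31"
labels = torch.zeros(1, len(TACTICS))
labels[0, TACTICS.index("reconnaissance")] = 1.0
labels[0, TACTICS.index("discovery")] = 1.0

inputs = tokenizer(payload, truncation=True, max_length=128, return_tensors="pt")
outputs = model(**inputs, labels=labels)   # multi-label BCE loss is computed here
outputs.loss.backward()                    # a full run would then step an optimizer

# At inference time, every tactic whose sigmoid score exceeds 0.5 is predicted,
# so a single packet can map to several tactics at once.
predicted = [t for t, p in zip(TACTICS, torch.sigmoid(outputs.logits)[0]) if p > 0.5]
print(predicted)

In the thesis pipeline, such a classification head would be trained on packets whose tactic labels come from the unsupervised labeling process described in the abstract and evaluated with the per-label F1 score.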
Description: Master's Thesis
National Chengchi University
Department of Management Information Systems
108356038
Source: http://thesis.lib.nccu.edu.tw/record/#G0108356038
Data Type: thesis
Appears in Collections: Theses

Files in This Item:
File: 603801.pdf (7.4 MB, Adobe PDF)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.