Title: Automatic ATT&CK Tactics Identification by Transformer-Based Language Model (基於Transformer語言模型之自動化ATT&CK戰術識別)
Author: Lin, Ling-Hsuan (林伶軒)
Advisor: Hsiao, Shun-Wen (蕭舜文)
Keywords: Packet analysis; Language model; Multi-label classification; Cyber threat intelligence; Cybersecurity; MITRE ATT&CK; Transformers
Date: 2021
Uploaded: 1-Apr-2022 15:01:32 (UTC+8)
Abstract:
Cybersecurity has become a primary global concern with the rapid increase in security attacks and data breaches. Artificial intelligence can help human analysts examine attacks automatically, in particular by inferring attack intent to generate threat intelligence. This study aims to automatically identify the intent of attack packets with an artificial-intelligence model. We propose a Transformer-based language model that learns the relationship between tactics (intents) and attack packets by analyzing the articles on the MITRE website. The model embeds a packet and outputs a high-dimensional vector representing the packet's content and its intent (if any). This study also establishes a labeled-dataset generation process that uses an unsupervised learning method to produce labeled data for training the language model, greatly reducing the burden of manually labeling a large dataset. Experimental results show that the multi-label classification language model fine-tuned in this study achieves an F1 score of 1 in identifying packet attack tactics.

Description: Master's thesis
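The abstract's core idea — a Transformer encoder that embeds a packet into a high-dimensional vector, followed by a head that scores each ATT&CK tactic independently (multi-label) — can be sketched in PyTorch. This is a minimal illustration only: the dimensions, byte-level vocabulary, pooling, and 0.5 decision threshold are assumptions for exposition, not the thesis's actual PELAT architecture or training setup.

```python
# Hedged sketch: embed a tokenized packet with a small Transformer encoder,
# then emit independent per-tactic probabilities via a sigmoid head.
import torch
import torch.nn as nn

NUM_TACTICS = 14                       # ATT&CK Enterprise defines 14 tactics
VOCAB, D_MODEL, MAX_LEN = 259, 64, 32  # assumed: byte vocab + special tokens

class PacketTacticClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_MODEL, NUM_TACTICS)

    def forward(self, tokens):
        h = self.encoder(self.embed(tokens))      # (batch, len, d_model)
        pooled = h.mean(dim=1)                    # packet embedding vector
        return torch.sigmoid(self.head(pooled))   # per-tactic probabilities

model = PacketTacticClassifier()
tokens = torch.randint(0, VOCAB, (1, MAX_LEN))    # a stand-in tokenized packet
probs = model(tokens)
predicted = (probs > 0.5).nonzero()   # tactics exceeding the assumed threshold
print(probs.shape)                    # torch.Size([1, 14])
```

Because the head applies a per-label sigmoid rather than a softmax over labels, a single packet can be assigned several tactics at once, which is what the multi-label formulation in the abstract requires.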
National Chengchi University
Department of Management Information Systems
Student ID: 108356038
Source: http://thesis.lib.nccu.edu.tw/record/#G0108356038
URI: http://nccur.lib.nccu.edu.tw/handle/140.119/139546
DOI: 10.6814/NCCU202200359
Type: thesis
Format: application/pdf (7,581,898 bytes)

Table of Contents:
1 Introduction
2 Related Work
  2.1 Background
  2.2 Literature Review
  2.3 Comparison
3 PELAT
  3.1 Overview
  3.2 A Language Model for Attack Lifecycle Knowledge
  3.3 Packet Parsing
  3.4 TTPs-Labeled Data Generation
  3.5 Semi-supervised Labeling
  3.6 Packet to TTPs Learning
4 Implementation
  4.1 Dataset
  4.2 Environment
5 Evaluation
  5.1 CTI to TTPs Learning
  5.2 Payload Embedding
  5.3 Signature
  5.4 Semi-supervised Learning
  5.5 Packet to TTPs Learning
  5.6 Inference on Packets
6 Discussion
  6.1 Findings
  6.2 Limitations
  6.3 Future Works
7 Conclusion
References
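The F1 score reported in the abstract is a multi-label metric: each (packet, tactic) decision is scored, not just one class per packet. A minimal sketch of how such a score can be computed with scikit-learn follows; the tactic names and the tiny prediction matrices are illustrative assumptions, not the thesis's evaluation data.

```python
# Hedged sketch: micro-averaged multi-label F1 over packet-to-tactic
# predictions, computed with scikit-learn. Rows = packets, columns = tactics.
import numpy as np
from sklearn.metrics import f1_score

tactics = ["reconnaissance", "initial-access", "execution", "persistence"]

# Multi-label ground truth: a packet may exhibit several tactics at once.
y_true = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 0],
                   [1, 1, 0, 1]])
y_pred = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 0],
                   [1, 1, 0, 1]])

# Micro averaging pools all (packet, tactic) decisions before computing
# precision and recall; perfect predictions give 1.0.
score = f1_score(y_true, y_pred, average="micro")
print(score)  # 1.0
```

With `average="micro"` every label decision carries equal weight, which is a common choice when label frequencies are imbalanced across tactics; `average="macro"` would instead average the per-tactic F1 scores.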