Academic Output - Theses

Title: Concolic Testing On Python Programs of Neural Network Models
Author: Chi, Ya-Yu (紀亞妤)
Advisor: Yu, Fang (郁方)
Keywords: Concolic Testing; Adversarial Attack; Neural Network Model
Date: 2023
Uploaded: 1-Feb-2024 10:57:09 (UTC+8)
Abstract: In recent years, rapid advances in artificial intelligence (AI) have produced major breakthroughs across many fields, particularly in applications of neural network models. However, the widespread deployment of AI models has raised concerns about their vulnerability to adversarial attacks. This study focuses on concolic testing, a program-testing technique combining concrete and symbolic execution, applied to Python programs that implement neural networks. We extend PyCT, a constraint-based concolic testing tool for Python programs, to handle a broader range of neural network operations, including floating-point computations in layers such as ReLU, max pooling, tanh, and sigmoid. The goal is to systematically generate prediction path constraints and corresponding inputs, thoroughly exploring neural network branches to help identify potential adversarial examples. The study demonstrates that this approach can effectively generate a variety of impactful adversarial examples across neural network architectures implemented in Python. By exposing the vulnerability of neural network models in Python environments to adversarial attacks, this work contributes to maintaining the stability of AI-driven applications, underscores the need for robust testing methods that detect and mitigate potential adversarial threats, and highlights the importance of rigorous testing techniques for ensuring the reliability of neural network models across the diverse applications that Python supports.
In the era of rapid advancements in artificial intelligence (AI), neural network models have achieved notable breakthroughs. However, concerns arise regarding their vulnerability to adversarial attacks. This study focuses on enhancing Concolic Testing, a specialized technique for testing Python programs implementing neural networks. The extended tool, PyCT, now accommodates a broader range of neural network operations, including floating-point computations. By systematically generating prediction path constraints, the research facilitates the identification of potential adversarial examples. Demonstrating effectiveness across various neural network architectures, the study highlights the vulnerability of Python-based neural network models to adversarial attacks. This research contributes to securing AI-powered applications by emphasizing the need for robust testing methodologies to detect and mitigate potential adversarial threats. It underscores the importance of rigorous testing techniques in fortifying neural network models for reliable applications in Python.
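The abstract's core idea, negating a branch condition inside a network's forward pass to synthesize an input that changes the prediction, can be sketched in miniature. This is an illustrative toy only, not PyCT's actual interface: `tiny_net` and `negate_relu_branch` are hypothetical names, and the negated path constraint is solved analytically here, whereas a real concolic tester would hand it to an SMT solver.

```python
def tiny_net(x: float) -> int:
    """A one-neuron 'network' whose ReLU pre-activation creates a branch."""
    pre = 2.0 * x - 1.0                 # affine layer
    hidden = pre if pre > 0 else 0.0    # ReLU introduces the path condition pre > 0
    return 1 if hidden > 0.5 else 0     # predicted label

def negate_relu_branch(_seed: float) -> float:
    """Solve the negated path constraint 2x - 1 <= 0 by hand: any x <= 0.5
    forces the ReLU onto its zero branch (an SMT solver would do this step)."""
    return 0.5 - 1e-6

seed = 1.0
adversarial = negate_relu_branch(seed)
assert tiny_net(seed) != tiny_net(adversarial)  # the new path flips the label
```

The same loop, scaled up to real layers and driven by a constraint solver, is what lets a concolic tester enumerate unexplored branches and surface adversarial examples.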
References:
[1] PyCT-RQ: Constraint-based concolic testing for neural networks. https://github.com/ManticoreDai/PyCT-rq.
[2] S. Cha, S. Hong, J. Bak, J. Kim, J. Lee, and H. Oh. Enhancing dynamic symbolic execution by automatically learning search heuristics. IEEE Transactions on Software Engineering, 48(9):3640–3663, 2021.
[3] H. Chen, S. M. Lundberg, and S.-I. Lee. Explaining a series of models by propagating Shapley values. Nature Communications, 13(1):4512, 2022.
[4] Y.-F. Chen, W.-L. Tsai, W.-C. Wu, D.-D. Yen, and F. Yu. PyCT: A Python concolic tester. In Programming Languages and Systems: 19th Asian Symposium, APLAS 2021, Chicago, IL, USA, October 17–18, 2021, Proceedings, pages 38–46. Springer, 2021.
[5] S. Fortz, F. Mesnard, E. Payet, G. Perrouin, W. Vanhoof, and G. Vidal. An SMT-based concolic testing tool for logic programs. In International Symposium on Functional and Logic Programming, pages 215–219. Springer, 2020.
[6] I. Fursov, A. Zaytsev, N. Kluchnikov, A. Kravchenko, and E. Burnaev. Gradient-based adversarial attacks on categorical sequence models via traversing an embedded world. In Analysis of Images, Social Networks and Texts: 9th International Conference, AIST 2020, Skolkovo, Moscow, Russia, October 15–16, 2020, Revised Selected Papers, pages 356–368. Springer, 2021.
[7] I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning. MIT Press, 2016.
[8] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
[9] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82–97, 2012.
[10] W. Huang, Y. Sun, X. Zhao, J. Sharp, W. Ruan, J. Meng, and X. Huang. Coverage-guided testing for recurrent neural networks. IEEE Transactions on Reliability, 71(3):1191–1206, 2021.
[11] G. Katz, C. Barrett, D. L. Dill, K. Julian, and M. J. Kochenderfer. Reluplex: An efficient SMT solver for verifying deep neural networks. In Computer Aided Verification: 29th International Conference, CAV 2017, Heidelberg, Germany, July 24–28, 2017, Proceedings, Part I, pages 97–117. Springer, 2017.
[12] G. Katz, D. A. Huang, D. Ibeling, K. Julian, C. Lazarus, R. Lim, P. Shah, S. Thakoor, H. Wu, A. Zeljić, et al. The Marabou framework for verification and analysis of deep neural networks. In Computer Aided Verification: 31st International Conference, CAV 2019, New York City, NY, USA, July 15–18, 2019, Proceedings, Part I, pages 443–452. Springer, 2019.
[13] Y. Kim, S. Hong, and M. Kim. Target-driven compositional concolic testing with function summary refinement for effective bug detection. In Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pages 16–26, 2019.
[14] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25, 2012.
[15] Z. Li, X. Ma, C. Xu, and C. Cao. Structural coverage criteria for neural networks could be misleading. In 2019 IEEE/ACM 41st International Conference on Software Engineering: New Ideas and Emerging Results (ICSE-NIER), pages 89–92. IEEE, 2019.
[16] L. Ma, F. Juefei-Xu, F. Zhang, J. Sun, M. Xue, B. Li, C. Chen, T. Su, L. Li, Y. Liu, et al. DeepGauge: Multi-granularity testing criteria for deep learning systems. In Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering, pages 120–131, 2018.
[17] L. Ma, F. Zhang, J. Sun, M. Xue, B. Li, F. Juefei-Xu, C. Xie, L. Li, Y. Liu, J. Zhao, et al. DeepMutation: Mutation testing of deep learning systems. In 2018 IEEE 29th International Symposium on Software Reliability Engineering (ISSRE), pages 100–111. IEEE, 2018.
[18] X. Meng, S. Kundu, A. K. Kanuparthi, and K. Basu. RTL-ConTest: Concolic testing on RTL for detecting security vulnerabilities. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 41(3):466–477, 2021.
[19] S.-M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard. Universal adversarial perturbations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1765–1773, 2017.
[20] S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard. DeepFool: A simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2574–2582, 2016.
[21] A. Nguyen, J. Yosinski, and J. Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 427–436, 2015.
[22] K. Pei, Y. Cao, J. Yang, and S. Jana. DeepXplore: Automated whitebox testing of deep learning systems. In Proceedings of the 26th Symposium on Operating Systems Principles, pages 1–18, 2017.
[23] K. Sen. Concolic testing. In Proceedings of the 22nd IEEE/ACM International Conference on Automated Software Engineering, pages 571–572, 2007.
[24] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
[25] Y. Sun, X. Huang, D. Kroening, J. Sharp, M. Hill, and R. Ashmore. DeepConcolic: Testing and debugging deep neural networks. In 2019 IEEE/ACM 41st International Conference on Software Engineering: Companion Proceedings (ICSE-Companion), pages 111–114. IEEE, 2019.
[26] Y. Sun, M. Wu, W. Ruan, X. Huang, M. Kwiatkowska, and D. Kroening. Concolic testing for deep neural networks. In Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering, pages 109–119, 2018.
[27] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
[28] X. Xie, T. Li, J. Wang, L. Ma, Q. Guo, F. Juefei-Xu, and Y. Liu. NPC: Neuron path coverage via characterizing decision logic of deep neural networks. ACM Transactions on Software Engineering and Methodology (TOSEM), 31(3):1–27, 2022.
[29] X. Xie, L. Ma, F. Juefei-Xu, M. Xue, H. Chen, Y. Liu, J. Zhao, B. Li, J. Yin, and S. See. DeepHunter: A coverage-guided fuzz testing framework for deep neural networks. In Proceedings of the 28th ACM SIGSOFT International Symposium on Software Testing and Analysis, pages 146–157, 2019.
[30] X. Xu, J. Chen, J. Xiao, Z. Wang, Y. Yang, and H. T. Shen. Learning optimization-based adversarial perturbations for attacking sequential recognition models. In Proceedings of the 28th ACM International Conference on Multimedia, pages 2802–2822, 2020.
[31] Z. Zhou, W. Dou, J. Liu, C. Zhang, J. Wei, and D. Ye. DeepCon: Contribution coverage testing for deep learning systems. In 2021 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER), pages 189–200. IEEE, 2021.
Description: Master's thesis
National Chengchi University
Department of Management Information Systems
110356043
Source: http://thesis.lib.nccu.edu.tw/record/#G0110356043
Type: thesis
dc.identifier (Other Identifiers): G0110356043
dc.identifier.uri (URI): https://nccur.lib.nccu.edu.tw/handle/140.119/149471
dc.description.tableofcontents:
1 Introduction 1
2 Related Work 3
  2.1 Concolic Testing 3
  2.2 Coverage-guided Testing on Neural Network 4
  2.3 Adversarial Attack 7
3 A Running Example on RNN Model 9
4 Methodology 14
  4.1 PyCT 14
  4.2 The Process of Attacking DNN 15
    4.2.1 Concolic Variable Selection Policies 16
    4.2.2 Concolic Execution 17
    4.2.3 Adversarial Attack Synthesis 18
  4.3 The Implementation of DNN Layers 19
    4.3.1 Activation Layer 19
    4.3.2 Convolutional Layer 21
    4.3.3 Pooling Layer 23
    4.3.4 Recurrent Neural Network (RNN) 25
    4.3.5 Long Short-Term Memory (LSTM) 26
    4.3.6 The Natural Exponential Function 29
5 Experiments 32
  5.1 Dataset 32
    5.1.1 MNIST Dataset 32
    5.1.2 Trading Strategy Dataset 33
    5.1.3 Internet Movie Database (IMDb) Dataset 33
  5.2 Model 34
    5.2.1 Simple CNN Model (CNN_678) 34
    5.2.2 Complex CNN Model (CNN_2418) 34
    5.2.3 SimpleRNN Model (RNN) 35
    5.2.4 LSTM Model (LSTM) 36
    5.2.5 Trading Strategy LSTM Model (Stock) 36
    5.2.6 IMDb LSTM Model (IMDb) 36
  5.3 Experimental Setup 37
  5.4 Constraints on NN Model (RQ1) 38
  5.5 Comparison between Stack and Queue (RQ2) 43
  5.6 Influence using SHAP Values (RQ3) 44
  5.7 Constraint-solving Performance (RQ4) 45
  5.8 Comparison with DeepConcolic (RQ5) 46
6 Conclusions 50
References 51
dc.format.extent: 4467534 bytes
dc.format.mimetype: application/pdf