Title: Influence-Guided Concolic Testing on Transformer (基於影響指引的動態型式測試技術研究:以Transformer為例)
Author: Wang, Yu (汪妤)
Advisor: Yu, Fang (郁方)
Keywords: Automatic unit testing; Concolic testing; Dynamic concolic testing; Python
Date: 2024
Uploaded: 4-Sep-2024 14:06:57 (UTC+8)
Abstract: We present a novel influence-guided concolic testing methodology aimed at generating critical test inputs that comprehensively explore neural network behaviors. The approach systematically examines decision paths by leveraging their influence, prioritizing status changes of essential neurons to optimize concolic testing performance. By directing the SMT solver toward the constraints most likely to uncover vulnerabilities, our method applies concolic testing to significant decision paths, improving the efficiency of automatic input synthesis for network vulnerability detection. Preliminary experiments demonstrate the efficacy of the approach, showing substantial advances in generating adversarial examples for both CNN and Transformer models. Finally, we propose a method for synthesizing critical decision paths and identify common decision logic shared among successful attacks.
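The record carries no code, but the workflow the abstract describes (score branches by neuron influence, then let the SMT solver attack the most influential constraints first) can be sketched briefly. The Python fragment below is a minimal illustration under stated assumptions: BranchCondition and influence_guided_search are hypothetical names, the influence scores are taken as given (e.g., produced by SHAP), and Z3 stands in for the solver backend. It is not the thesis's implementation, which builds on the PyCT concolic tester.

```python
import heapq
import z3

class BranchCondition:
    """One branch constraint recorded during a concolic run, paired with
    the SHAP-based influence of the neuron whose status it controls."""
    def __init__(self, constraint: z3.BoolRef, influence: float):
        self.constraint = constraint
        self.influence = influence

    def __lt__(self, other: "BranchCondition") -> bool:
        # heapq builds a min-heap; inverting the comparison makes the
        # highest-influence branch pop first.
        return self.influence > other.influence

def influence_guided_search(path_conditions, budget: int = 100):
    """Pop the most influential branches and ask the solver for inputs
    that flip them, i.e. force the unexplored side of the branch."""
    queue = list(path_conditions)
    heapq.heapify(queue)
    candidate_inputs = []
    while queue and budget > 0:
        branch = heapq.heappop(queue)
        solver = z3.Solver()
        solver.add(z3.Not(branch.constraint))  # negate: take the other side
        if solver.check() == z3.sat:
            candidate_inputs.append(solver.model())  # a new test input
        budget -= 1
    return candidate_inputs
```

The inverted __lt__ turns Python's min-heap into a max-heap over influence, so the highest-impact branch conditions are negated first, which is the prioritization the abstract credits for the efficiency gain.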
Description: Master's thesis, National Chengchi University, Department of Management Information Systems (student ID 111356048)
Source: http://thesis.lib.nccu.edu.tw/record/#G0111356048
Type: thesis
URI: https://nccur.lib.nccu.edu.tw/handle/140.119/153166
Table of Contents:
Acknowledgement
Abstract (Chinese)
Abstract
Contents
1 Introduction
2 Related Work
  2.1 Automatic Testing on Neural Networks
3 Methodology
  3.1 Influence-guided Concolic Testing
    3.1.1 Bookkeeping the Associated Neurons
    3.1.2 Registering Associated Neurons During Forwarding Computations
    3.1.3 Calculating the Influence of All Neurons
    3.1.4 A Branch Handler Pushing Branch Conditions into the Priority Queue by SHAP-Based Influence
    3.1.5 Popping Constraints in Each Iteration
  3.2 Testing on Transformer Architecture
    3.2.1 Custom MultiHeadAttention Layer
    3.2.2 Computation of Dot-Product Attention
    3.2.3 Concatenation and Transformation of Attention Heads
    3.2.4 Flatten Layer
    3.2.5 Dense Layer with Softmax
  3.3 SHAP-Based Abstract Critical Decision Path Synthesis
4 Experiment
  4.1 Experimental Setup
  4.2 Identification of Adversarial Cases (RQ1)
  4.3 Effectiveness of SHAP-Based Priority in Targeting Critical Decisions (RQ2)
  4.4 Trade-Off Between High-Influence and Low-Complexity Constraints (RQ3)
  4.5 Common Decision Logic for Adversarial Inputs (RQ4)
5 Discussion
6 Conclusion
Reference
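Section 3.2.2 above names the computation of dot-product attention inside the custom MultiHeadAttention layer. The thesis's own layer code is not part of this record; as a point of reference only, a minimal NumPy sketch of standard scaled dot-product attention (Vaswani et al., 2017), the computation such a layer would perform, might look like:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)  # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)    # stabilize the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted sum of values
```

Presumably the custom layer of Section 3.2.1 re-implements this computation so that the concolic tester can trace its operations symbolically; that reading is inferred from the chapter structure, not from the thesis's code.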
Format: application/pdf, 2424850 bytes