Title 對抗神經網路之執行路徑差異分析研究
Differential Analysis on Adversarial Neural Network Executions
Author Chen, Yi-Chun (陳怡君)
Advisor Yu, Fang (郁方)
Keywords Adversarial machine learning; Explainable AI; Neural network execution; Differential analysis
Date 2021
Deposited 2-Sep-2021 15:51:45 (UTC+8)
Abstract While Convolutional Neural Networks (CNNs) have been widely adopted in state-of-the-art machine learning applications since their great success in image recognition, research on adversarial machine learning has shown that an attacker can manipulate the input to a CNN model so that the model produces incorrect results.
In this study, we explore the execution differences between adversarial and normal samples, with the aim of deriving an explainable method that analyzes, from the perspective of program execution, how adversarial methods attack CNN models.
We conduct our differential analysis of normal and adversarial examples against the CNN models of Keras Applications. We synthesize adversarial patches that, when composited onto normal images, twist the recognition results; the attack succeeds on five models: VGG16, VGG19, ResNet50, InceptionV3, and Xception.
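At inference time, an adversarial patch is composited over a region of the input image; the patch contents, not the placement logic, carry the attack. The placement step can be sketched as follows. This is an illustrative reconstruction, not the thesis's code: nested Python lists stand in for image tensors, and the patch synthesis itself (the optimization run against the five Keras models) is not shown.

```python
def apply_patch(image, patch, top, left):
    """Composite `patch` onto a copy of `image` by overwriting a pixel region.

    `image` and `patch` are H x W nested lists of pixel values; a real
    pipeline works on H x W x 3 arrays, but the compositing is the same.
    """
    h, w = len(patch), len(patch[0])
    if top + h > len(image) or left + w > len(image[0]):
        raise ValueError("patch does not fit at the given position")
    out = [row[:] for row in image]  # copy so the original image is untouched
    for i in range(h):
        for j in range(w):
            out[top + i][left + j] = patch[i][j]
    return out
```

Feeding both the original image and the patched copy to the same model then yields the paired normal/adversarial executions that the differential analysis compares.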
We leverage a Python profiler to trace deep opcode executions at runtime, together with their parameters and return values.
By profiling Python program execution, we can compare adversarial and normal executions at the deep opcode level against several criteria: counting differences in the number of function calls, discovering differing arguments and return values, and measuring performance differences such as time and memory consumption.
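The call-count criterion can be sketched with the standard-library profiler. This is an illustrative reconstruction, not the thesis's actual tooling: `cProfile` stands in for the author's profiler, and two toy pipelines stand in for the normal and adversarial inference runs.

```python
import cProfile
import pstats

def profile_call_counts(fn, *args):
    """Run fn under cProfile and return {function_name: call_count}."""
    prof = cProfile.Profile()
    prof.enable()
    fn(*args)
    prof.disable()
    # pstats maps (filename, lineno, funcname) -> (cc, nc, tt, ct, callers)
    stats = pstats.Stats(prof).stats
    return {name: cc for (_file, _line, name), (cc, *_rest) in stats.items()}

def diff_call_counts(base, other):
    """Functions whose call counts differ between two profiled runs."""
    names = set(base) | set(other)
    return {n: (base.get(n, 0), other.get(n, 0))
            for n in names if base.get(n, 0) != other.get(n, 0)}

# Toy stand-ins for normal vs. patched inference (hypothetical workloads).
def normal_pipeline(xs):
    return [abs(v) for v in xs]

def patched_pipeline(xs):
    ys = [abs(v) for v in xs]
    return [min(v, 1.0) for v in ys]  # extra work only the "adversarial" run does
```

Diffing the two profiles surfaces the calls present only in one run (here, the extra `min`); the thesis applies the same idea at scale and adds argument/return-value and timing comparisons.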
We report the execution differences between various adversarial and normal samples against Keras Applications and object detection applications.
From these differences, we are able to derive effective rules that distinguish original images from images with inserted objects, but no effective rules that distinguish adversarial patches, which twist the recognition results, from benign inserted objects.
References
[1] Google Cloud Explainable AI. https://cloud.google.com/explainable-ai.
[2] Silva, A., and Leino, K. R. M. (Eds.) Computer Aided Verification. Springer, Cham, 2021.
[3] Arya, V., Bellamy, R. K., Chen, P.-Y., Dhurandhar, A., Hind, M., Hoffman, S. C., Houde, S., Liao, Q. V., Luss, R., Mojsilovic, A., et al. AI Explainability 360: An extensible toolkit for understanding data and machine learning models. Journal of Machine Learning Research 21, 130 (2020), 1–6.
[4] Baluja, S., and Fischer, I. Learning to attack: Adversarial transformation networks. In Proceedings of the AAAI Conference on Artificial Intelligence (2018).
[5] Bhambri, S., Muku, S., Tulasi, A., and Buduru, A. B. A survey of black-box adversarial attacks on computer vision models, 2019.
[6] Brown, T. B., Mané, D., Roy, A., Abadi, M., and Gilmer, J. Adversarial patch. ArXiv abs/1712.09665 (2017).
[7] Chattopadhay, A., Sarkar, A., Howlader, P., and Balasubramanian, V. N. Grad-CAM++: Generalized gradient-based visual explanations for deep convolutional networks. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV) (2018), pp. 839–847.
[8] Chen, S.-T., Cornelius, C., Martin, J., and Chau, D. H. P. ShapeShifter: Robust physical adversarial attack on Faster R-CNN object detector. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (2018), Springer, pp. 52–68.
[9] Cheng, M., Yi, J., Chen, P.-Y., Zhang, H., and Hsieh, C.-J. Seq2Sick: Evaluating the robustness of sequence-to-sequence models with adversarial examples. In AAAI (2020), pp. 3601–3608.
[10] Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 1251–1258.
[11] Doshi-Velez, F., and Kim, B. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608 (2017).
[12] Gehr, T., Mirman, M., Drachsler-Cohen, D., Tsankov, P., Chaudhuri, S., and Vechev, M. AI2: Safety and robustness certification of neural networks with abstract interpretation. In 2018 IEEE Symposium on Security and Privacy (SP) (2018), IEEE, pp. 3–18.
[13] Goodfellow, I., Shlens, J., and Szegedy, C. Explaining and harnessing adversarial examples. In International Conference on Learning Representations (2015).
[14] Goswami, G., Ratha, N., Agarwal, A., Singh, R., and Vatsa, M. Unravelling robustness of deep learning based face recognition against adversarial attacks. In Proceedings of the AAAI Conference on Artificial Intelligence (2018).
[15] Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., and Pedreschi, D. A survey of methods for explaining black box models. ACM Computing Surveys (CSUR) 51, 5 (2018), 1–42.
[16] He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 770–778.
[17] Jan, S. T., Messou, J., Lin, Y.-C., Huang, J.-B., and Wang, G. Connecting the digital and physical world: Improving the robustness of adversarial attacks. In Proceedings of the AAAI Conference on Artificial Intelligence (2019), vol. 33, pp. 962–969.
[18] Krizhevsky, A., Sutskever, I., and Hinton, G. E. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (2012), F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, Eds., vol. 25, Curran Associates, Inc.
[19] Kurakin, A., Goodfellow, I. J., and Bengio, S. Adversarial examples in the physical world. ArXiv abs/1607.02533 (2016).
[20] Li, O., Liu, H., Chen, C., and Rudin, C. Deep learning for case-based reasoning through prototypes: A neural network that explains its predictions. In Proceedings of the AAAI Conference on Artificial Intelligence (2018).
[21] Li, Y., Bai, S., Zhou, Y., Xie, C., Zhang, Z., and Yuille, A. L. Learning transferable adversarial examples via ghost networks. In AAAI (2020), pp. 11458–11465.
[22] Liu, A., Liu, X., Fan, J., Ma, Y., Zhang, A., Xie, H., and Tao, D. Perceptual-sensitive GAN for generating adversarial patches. In Proceedings of the AAAI Conference on Artificial Intelligence (2019), vol. 33, pp. 1028–1035.
[23] Luo, B., Liu, Y., Wei, L., and Xu, Q. Towards imperceptible and robust adversarial example attacks against neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence (2018).
[24] Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. Towards deep learning models resistant to adversarial attacks, 2017.
[25] Moosavi-Dezfooli, S.-M., Fawzi, A., and Frossard, P. DeepFool: A simple and accurate method to fool deep neural networks. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 2574–2582.
[26] Noller, Y., Păsăreanu, C. S., Böhme, M., Sun, Y., Nguyen, H. L., and Grunske, L. HyDiff: Hybrid differential software analysis. In 2020 IEEE/ACM 42nd International Conference on Software Engineering (ICSE) (2020), pp. 1273–1285.
[27] Nori, H., Jenkins, S., Koch, P., and Caruana, R. InterpretML: A unified framework for machine learning interpretability. arXiv preprint arXiv:1909.09223 (2019).
[28] Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z. B., and Swami, A. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM Asia Conference on Computer and Communications Security (2017), pp. 506–519.
[29] Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik, Z. B., and Swami, A. The limitations of deep learning in adversarial settings. In 2016 IEEE European Symposium on Security and Privacy (EuroS&P) (2016), IEEE, pp. 372–387.
[30] Paulsen, B., Wang, J., and Wang, C. ReluDiff: Differential verification of deep neural networks. In Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering (New York, NY, USA, 2020), ICSE '20, Association for Computing Machinery, pp. 714–726.
[31] Ribeiro, M. T., Singh, S., and Guestrin, C. "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2016), pp. 1135–1144.
[32] Ross, A., and Doshi-Velez, F. Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients. In Proceedings of the AAAI Conference on Artificial Intelligence (2018).
[33] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision 115, 3 (2015), 211–252.
[34] Sermanet, P., Eigen, D., Zhang, X., Mathieu, M., Fergus, R., and LeCun, Y. OverFeat: Integrated recognition, localization and detection using convolutional networks. arXiv preprint arXiv:1312.6229 (2013).
[35] Sharif, M., Bhagavatula, S., Bauer, L., and Reiter, M. K. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (2016), pp. 1528–1540.
[36] Shriver, D., Elbaum, S., and Dwyer, M. B. DNNV: A framework for deep neural network verification. arXiv preprint arXiv:2105.12841 (2021).
[37] Simonyan, K., and Zisserman, A. Very deep convolutional networks for large-scale image recognition. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7–9, 2015, Conference Track Proceedings (2015), Y. Bengio and Y. LeCun, Eds.
[38] Singh, G., Ganvir, R., Püschel, M., and Vechev, M. Beyond the single neuron convex barrier for neural network certification.
[39] Singh, G., Gehr, T., Mirman, M., Püschel, M., and Vechev, M. T. Fast and effective robustness certification. NeurIPS 1, 4 (2018), 6.
[40] Singh, G., Gehr, T., Püschel, M., and Vechev, M. An abstract domain for certifying neural networks. Proceedings of the ACM on Programming Languages 3, POPL (2019), 1–30.
[41] Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 1–9.
[42] Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I. J., and Fergus, R. Intriguing properties of neural networks. CoRR abs/1312.6199 (2013).
[43] Tian, S., Yang, G., and Cai, Y. Detecting adversarial examples through image transformation. In Proceedings of the AAAI Conference on Artificial Intelligence (2018).
[44] Tramèr, F., Kurakin, A., Papernot, N., Goodfellow, I., Boneh, D., and McDaniel, P. Ensemble adversarial training: Attacks and defenses. In International Conference on Learning Representations (2018).
[45] Wikipedia contributors. Digital image processing. Wikipedia, The Free Encyclopedia, 2021. [Online; accessed 4-March-2021].
[46] Xie, X., Ma, L., Wang, H., Li, Y., Liu, Y., and Li, X. DiffChaser: Detecting disagreements for deep neural networks. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19 (7 2019), International Joint Conferences on Artificial Intelligence Organization, pp. 5772–5778.
[47] Yang, P., Chen, J., Hsieh, C.-J., Wang, J.-L., and Jordan, M. I. ML-LOO: Detecting adversarial examples with feature attribution. In AAAI (2020), pp. 6639–6647.
[48] Zhang, H., Li, Z., Li, G., Ma, L., Liu, Y., and Jin, Z. Generating adversarial examples for holding robustness of source code processing models. In Proceedings of the AAAI Conference on Artificial Intelligence (2020), pp. 1169–1176.
[49] Zheng, T., Chen, C., and Ren, K. Distributionally adversarial attack. In Proceedings of the AAAI Conference on Artificial Intelligence (2019), vol. 33, pp. 2253–2260.
Description Master's thesis
National Chengchi University
Department of Management Information Systems
108356016
Source http://thesis.lib.nccu.edu.tw/record/#G0108356016
Type thesis
Identifier G0108356016
URI http://nccur.lib.nccu.edu.tw/handle/140.119/136842
Table of contents 1 Introduction 1
2 Related Work 3
2.1 Keras Applications 3
2.2 Adversarial Machine Learning 4
2.3 Explainable AI 7
2.4 Neural Network Verification 8
3 Methodology 10
3.1 Craft Adversarial Samples 10
3.2 Python Script Profiling 12
3.3 Call Sequence Construction 14
3.4 Differential Analysis 17
3.4.1 Call Count Difference 19
3.4.2 Execution Time Difference 21
3.4.3 Output Difference 24
4 Keras Application Analysis 29
4.1 Function calls 29
4.2 Execution time 36
4.3 Return value 43
5 Object Detection Analysis 48
5.1 Differential analysis 49
5.2 Rule derivation and evaluation 61
6 Conclusion 63
References 65
Format 15061099 bytes, application/pdf
DOI 10.6814/NCCU202101295