Title: Controllable Robustness Training (神經網路的可控制穩健性訓練研究)
Author: Hu, Yu-Chi (胡育騏)
Advisor: Yu, Fang (郁方)
Keywords: Abstract interpretation (抽象解釋); Adversarial training (對抗訓練); Logic rule (邏輯規則); Robustness (模型穩健性)
Date: 2023
Uploaded: 1-Sep-2023 14:55:10 (UTC+8)

Abstract (translated from Chinese): In the era of big data, neural network techniques have made breakthrough progress. However, the prediction accuracy of neural networks, and their robustness to external perturbations and attacks, have become an important concern. Training a neural network with adversarial examples or abstract interpretation to improve its robustness can degrade the model's accuracy on the original task and its training performance. To balance accuracy against robustness, we propose a controllable robustness training method that controls the neural network model by introducing rules into the adversarial training process. We treat the adversarial-training loss as a loss on the rule, thereby separating robustness training from the training process of the original task. At test time, adjusting the strength of the rule balances the model's accuracy and robustness with respect to the rules and constraints it learns. We show that controlling the contribution of robustness training achieves a better balance between the accuracy and robustness of neural networks.
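The loss mixing described in the abstract can be written compactly. This is a minimal sketch with hypothetical notation (the record fixes no symbols, and the convex-combination form is an assumption): let L_task be the loss on clean inputs, L_rule the loss on adversarially perturbed (or abstractly interpreted) inputs, and alpha the rule strength:

\[
\mathcal{L}(\theta; \alpha) \;=\; (1 - \alpha)\,\mathcal{L}_{\mathrm{task}}(\theta) \;+\; \alpha\,\mathcal{L}_{\mathrm{rule}}(\theta), \qquad \alpha \in [0, 1].
\]

Setting alpha = 0 recovers training on the original task alone; sweeping alpha at test time traces out the accuracy/robustness trade-off.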
Abstract: Neural network techniques allow the development of complex systems that would be difficult for humans to implement by hand. However, training these networks for robustness using adversarial examples or abstract interpretation can reduce accuracy and training performance on the original prediction task. To balance the trade-off between accuracy and robustness, we propose controllable robustness training, in which neural network models are controlled with rule representations during adversarial training. The adversarial-training loss can then be treated as a loss on the rule, separating robustness training from training on the original task. Rule strength can be adjusted at test time through its ratio in the loss, which balances accuracy and robustness in how the model learns rules and constraints. We demonstrate that controlling the contribution of robustness training achieves a better balance between the accuracy and robustness of neural networks.
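Below is a minimal PyTorch sketch of one training step under these assumptions. The rule loss is the cross-entropy on FGSM adversarial examples (FGSM is one common choice; the record does not name the attack), and the rule strength alpha is sampled per batch and fed to the model as an extra input so it can be chosen freely at test time, loosely following the rule-representation control idea. All names here (AlphaConditionedNet, fgsm_example, epsilon) are illustrative, not taken from the thesis.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlphaConditionedNet(nn.Module):
    """Toy MNIST-scale classifier that also consumes the rule strength alpha,
    so the accuracy/robustness trade-off can be dialed at test time.
    Hypothetical interface, loosely following rule-representation control."""
    def __init__(self, in_dim=784, n_classes=10):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim + 1, 256), nn.ReLU(),
            nn.Linear(256, n_classes))

    def forward(self, x, alpha):
        # Append alpha as one extra input feature per example.
        a = torch.full((x.size(0), 1), float(alpha), device=x.device)
        return self.body(torch.cat([x.flatten(1), a], dim=1))

def fgsm_example(model, x, y, alpha, epsilon=0.1):
    # One signed-gradient FGSM step (Goodfellow et al., 2014).
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv, alpha), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def train_step(model, x, y, optimizer):
    alpha = torch.rand(()).item()                        # sample rule strength per batch
    x_adv = fgsm_example(model, x, y, alpha)             # perturbed inputs for the rule
    task_loss = F.cross_entropy(model(x, alpha), y)      # loss on the original task
    rule_loss = F.cross_entropy(model(x_adv, alpha), y)  # adversarial loss as rule loss
    loss = (1.0 - alpha) * task_loss + alpha * rule_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At test time, model(x, 0.0) behaves like the plain task model, while model(x, 1.0) gives full weight to the robustness rule; intermediate values trade one off against the other without retraining.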
Description: Master's thesis, Department of Management Information Systems (資訊管理學系), National Chengchi University (國立政治大學), 110356042
Source: http://thesis.lib.nccu.edu.tw/record/#G0110356042
URI: http://nccur.lib.nccu.edu.tw/handle/140.119/146895
Type: thesis

Table of Contents:
1 Introduction
2 Related Work
  2.1 Adversarial training on neural networks
  2.2 Abstract interpretation
  2.3 Rule combination model
  2.4 Model fusion
3 Methodology
  3.1 Overview
  3.2 Generate adversarial examples for adversarial training
  3.3 Generate perturbed data for abstract interpretation
    3.3.1 Abstraction process
    3.3.2 Abstract interpretation on a neural network
    3.3.3 Abstract interpretation for image classification
  3.4 Controllable robustness training
4 Experiments
  4.1 Dataset and Performance Metrics
  4.2 Experiment settings
  4.3 Results of Rule for AT
  4.4 Results of Rule for AI and Rule for AT+AI
5 Discussion
6 Conclusions