Title: The Study of Preimage Analysis with Visual Analytics for XAI
Author: Gu, Sheng-Zhao
Advisors: Tsaih, Rua-Huan; Yu, Fang
Keywords: Explainable Artificial Intelligence (XAI); 1-hidden-layer feed-forward neural network; Rectified Linear Unit (ReLU); Preimage analysis; Visual analytics
Date: 2021
Date Uploaded: 1-Jun-2021 14:55:11 (UTC+8)
Abstract: This research explores Explainable Artificial Intelligence (XAI). To address the black-box challenge of Artificial Neural Networks (ANN), it investigates whether the mathematical tool of preimage analysis, combined with visual analytics, can be used to open the black box. The study focuses on a 1-hidden-layer feed-forward neural network (1HLNN) with m input nodes, p hidden nodes using the Rectified Linear Unit (ReLU) activation function, and one output node using a linear activation function. ReLU has been widely adopted in deep learning in recent years because of the following advantages: (1) it is computationally cheap, involving no complex mathematical operations, so both training and inference take less time; (2) its linearity means the function has no "saturated" region as the total input grows large; and (3) the vanishing-gradient problem is easier to avoid. Moreover, compared with Deep Neural Networks (DNN), a 1HLNN is easier to analyze and its black box easier to open, and the XAI results derived from it may extend to DNNs. One focus of this research is therefore the interpretability of a 1HLNN with the ReLU activation function.
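The network the abstract describes (m inputs, p ReLU hidden nodes, one linear output node) can be written down in a few lines. The following is a minimal illustrative sketch; the weight values and example input are invented for demonstration and are not taken from the thesis:

```python
import numpy as np

def relu(z):
    # Rectified Linear Unit: max(0, z), applied element-wise.
    return np.maximum(0.0, z)

def one_hlnn(x, W_h, b_h, w_o, b_o):
    """1-hidden-layer feed-forward network (1HLNN):
    p ReLU hidden nodes followed by a single linear output node."""
    h = relu(W_h @ x + b_h)   # hidden activations, shape (p,)
    return w_o @ h + b_o      # linear output node, scalar

# Illustrative network with m = 2 inputs and p = 3 hidden nodes.
rng = np.random.default_rng(0)
W_h = rng.standard_normal((3, 2))  # hidden-layer weights
b_h = rng.standard_normal(3)       # hidden-layer biases
w_o = rng.standard_normal(3)       # output weights
b_o = 0.0                          # output bias

print(float(one_hlnn(np.array([1.0, -0.5]), W_h, b_h, w_o, b_o)))
```

Because each hidden node is either active (identity) or inactive (zero) for a given input, the network is piecewise linear in the input space, which is what makes preimage analysis of such a model tractable.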
References: Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., ... & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82-115.
Belciug, S., & Gorunescu, F. (2018). Learning a single-hidden layer feedforward neural network using a rank correlation-based strategy with application to high dimensional gene expression and proteomic spectra datasets in cancer detection. Journal of Biomedical Informatics, 83, 159-166.
Buchanan, B. G. (2005). A (very) brief history of artificial intelligence. AI Magazine, 26(4), 53-60.
Cao, J., & Lin, Z. (2015). Extreme learning machines on high dimensional and large data applications: a survey. Mathematical Problems in Engineering, 2015.
Chen, H., Lundberg, S., & Lee, S. I. (2021). Explaining models by propagating Shapley values of local components. In Explainable AI in Healthcare and Medicine (pp. 261-270). Springer, Cham.
Choo, J., & Liu, S. (2018). Visual analytics for explainable deep learning. IEEE Computer Graphics and Applications, 38(4), 84-92.
Deng, J., Li, K., & Irwin, G. W. (2011). Fast automatic two-stage nonlinear model identification based on the extreme learning machine. Neurocomputing, 74(16), 2422-2429.
Glorot, X., Bordes, A., & Bengio, Y. (2011, June). Deep sparse rectifier neural networks. In Proceedings of the fourteenth international conference on artificial intelligence and statistics (pp. 315-323). JMLR Workshop and Conference Proceedings.
Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. Cambridge, MA: MIT Press.
Gunning, D. (2017). Explainable artificial intelligence (XAI). Defense Advanced Research Projects Agency (DARPA), nd Web, 2(2).
He, K., Zhang, X., Ren, S., & Sun, J. (2015). Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision (pp. 1026-1034).
High-Level Expert Group on AI (2019). Ethics guidelines for trustworthy AI (Report). European Commission.
Huysmans, J., Dejaeger, K., Mues, C., Vanthienen, J., & Baesens, B. (2011). An empirical evaluation of the comprehensibility of decision table, tree and rule based predictive models. Decision Support Systems, 51(1), 141-154.
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25, 1097-1105.
Lifschitz, V. (2011). John McCarthy (1927–2011). Nature, 480(7375), 40-40.
Lundberg, S., & Lee, S. I. (2017). A unified approach to interpreting model predictions. arXiv preprint arXiv:1705.07874.
Maas, A. L., Hannun, A. Y., & Ng, A. Y. (2013, June). Rectifier nonlinearities improve neural network acoustic models. In Proc. ICML (Vol. 30, No. 1, p. 3).
Nielsen, M. A. (2015). Neural networks and deep learning (Vol. 25). San Francisco, CA: Determination press.
Oke, S. A. (2008). A literature review on artificial intelligence. International Journal of Information and Management Sciences, 19(4), 535-570.
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016, August). "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135-1144).
Rudin, C. (2018). Please stop explaining black box models for high stakes decisions. arXiv preprint arXiv:1811.10154, 1.
Shrikumar, A., Greenside, P., Shcherbina, A., & Kundaje, A. (2016). Not just a black box: Learning important features through propagating activation differences. arXiv preprint arXiv:1605.01713.
Subudhi, B., & Jena, D. (2011). Nonlinear system identification using memetic differential evolution trained neural networks. Neurocomputing, 74(10), 1696-1709.
Tsai, Y. H., Jheng, Y. J., & Tsaih, R. H. (2019, July). The Cramming, Softening and Integrating Learning Algorithm with Parametric ReLu Activation Function for Binary Input/Output Problems. In 2019 International Joint Conference on Neural Networks (IJCNN) (pp. 1-7). IEEE.
Tsaih, R. H., Wan, Y. W., & Huang, S. Y. (2008, June). The rule-extraction through the preimage analysis. In 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence) (pp. 1488-1494). IEEE.
Tsaih, R. R. (1993). The softening learning procedure. Mathematical and Computer Modelling, 18(8), 61-64.
Tsaih, R. R. (1993, October). The softening learning procedure for the layered feedforward networks with multiple output nodes. In Proceedings of 1993 International Conference on Neural Networks (IJCNN-93-Nagoya, Japan) (Vol. 1, pp. 593-596). IEEE.
Tsaih, R. H. R. (1997). Reasoning neural networks. In Mathematics of Neural Networks (pp. 366-371). Springer, Boston, MA.
Tsaih, R. R. (1998). An explanation of reasoning neural networks. Mathematical and Computer Modelling, 28(2), 37-44.
Copeland, B. J. (Ed.). (2004). The Essential Turing. Clarendon Press.
Wang, F. Y., Zhang, J. J., Zheng, X., Wang, X., Yuan, Y., Dai, X., ... & Yang, L. (2016). Where does AlphaGo go: From Church-Turing thesis to AlphaGo thesis and beyond. IEEE/CAA Journal of Automatica Sinica, 3(2), 113-120.
Watanabe, E., & Shimizu, H. (1993, October). Algorithm for pruning hidden units in multilayered neural network for binary pattern classification problem. In Proceedings of 1993 International Conference on Neural Networks (IJCNN-93-Nagoya, Japan) (Vol. 1, pp. 327-330). IEEE.
Description: Master's thesis
National Chengchi University
Department of Management Information Systems
107356043
Source: http://thesis.lib.nccu.edu.tw/record/#G0107356043
Type: thesis
dc.identifier (Other Identifiers) G0107356043
dc.identifier.uri (URI) http://nccur.lib.nccu.edu.tw/handle/140.119/135329
dc.description.tableofcontents Abstract (Chinese) 6
Abstract 7
Chapter 1 Introduction 8
1.1 Research Background and Motivation 8
1.2 Research Objectives 11
Chapter 2 Literature Review 12
2.1 Explainable Artificial Intelligence (XAI) 12
2.2 Preimage Analysis 20
2.3 1-Hidden-Layer Feed-forward Neural Network 20
Chapter 3 Preimage Analysis 23
3.1 Preimage Analysis of a 1HLNN with the ReLU Activation Function 23
Chapter 4 Visualization 34
4.1 Preimage Visualization 34
4.2 Visual Analytics 60
Chapter 5 Conclusion and Discussion 70
5.1 Visual Analytics (with Additional Hidden Nodes) 70
5.2 Conclusion 72
5.3 Research Limitations and Future Outlook 73
References 75
dc.format.extent 11549430 bytes
dc.format.mimetype application/pdf
dc.identifier.doi (DOI) 10.6814/NCCU202100462