Title Explainable Artificial Intelligence: Research Framework and A Bibliometric Analysis (可解釋性人工智慧的研究架構及書目計量學分析)
Author Yang, Cheng-Yu (楊承祐)
Advisors Liang, Ting-Peng (梁定澎); Peng, Chih-Hung (彭志宏)
Keywords Explainable Artificial Intelligence
Deep Learning
Black-box Model
Research Framework
Bibliometric Analysis
Date 2021
Date uploaded 2-Mar-2021 14:19:34 (UTC+8)
Abstract In recent years, as the field of artificial intelligence has advanced, the black-box models of deep learning have made prediction results difficult to understand, creating bottlenecks for the development of AI at the technical, legal, economic, and social levels. Whether the key, explainable factors behind a decision can be extracted from an opaque black-box model has therefore become a crucial and pressing research direction, known as eXplainable Artificial Intelligence (XAI).
Academic research on explainable AI, however, is still at an early stage and lacks a complete context and a comprehensive synthesis. The main purpose of this study is to compile and analyze the published literature on explainable AI, summarize the current state of research, clarify existing problems, and propose a research framework that future researchers can draw on. The study collects the existing academic literature on explainable AI from the Web of Science database and applies bibliometric analysis, supported by the VOSviewer software, to analyze the literature quantitatively and visually and to compile the academically important publications. It also provides a structured synthesis of XAI techniques and evaluation methods, offering a basic technical understanding that can promote further research. Finally, it summarizes the current problems and development limitations of XAI research and points out directions for future researchers.
In recent years, artificial intelligence (AI) and deep learning have become popular in predictive modeling and decision making, but the process that produces their results is not transparent and is often hard to understand. This has become a bottleneck for adopting AI from technical, legal, economic, and social perspectives. Making AI decisions explainable despite the opaque black-box model has therefore become an important and urgent research direction, known as eXplainable Artificial Intelligence (XAI). A number of papers related to XAI have been published in different areas, but explainability spans so many distinct issues that it is hard for researchers interested in entering the area to form a complete picture. The purpose of this research is to conduct a bibliometric analysis that provides a comprehensive overview of the explainable artificial intelligence literature. Published literature is identified, sorted, and clarified to build a research framework that can guide researchers. Based on our findings, future research issues and constraints of explainable artificial intelligence are identified. The findings shed light on the current status and future directions of XAI.
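The bibliometric mapping described above rests on keyword co-occurrence counts, which VOSviewer normalizes with the association-strength measure analyzed by Van Eck and Waltman (2009), reference [35] below. The short Python sketch that follows illustrates the general idea of counting co-occurrences across bibliographic records and applying that normalization; the records, keyword names, and variable names are illustrative assumptions, and the code is a sketch of the technique, not code from the thesis or from VOSviewer (which performs these computations itself on records exported from Web of Science).

# Illustrative sketch (assumed, not from the thesis): count keyword
# co-occurrences across bibliographic records and normalize them with a
# measure proportional to VOSviewer's association strength
# (Van Eck & Waltman, 2009): s_ij = c_ij / (c_i * c_j).
from collections import Counter
from itertools import combinations

# Hypothetical records: each entry is the keyword list of one publication.
records = [
    ["explainable artificial intelligence", "deep learning", "black-box model"],
    ["explainable artificial intelligence", "interpretability"],
    ["deep learning", "interpretability", "black-box model"],
]

occurrence = Counter()      # number of records mentioning each keyword
co_occurrence = Counter()   # number of records mentioning each keyword pair

for keywords in records:
    unique = sorted(set(keywords))          # canonical order avoids (a, b) vs (b, a)
    occurrence.update(unique)
    co_occurrence.update(combinations(unique, 2))

for (ki, kj), c_ij in co_occurrence.items():
    strength = c_ij / (occurrence[ki] * occurrence[kj])
    print(f"{ki} -- {kj}: co-occurrences={c_ij}, association strength={strength:.2f}")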
References [1] Arras, L., Horn, F., Montavon, G., Müller, K. R., & Samek, W. (2017). "What is relevant in a text document?": An interpretable machine learning approach. PLoS ONE, 12(8), e0181142.
[2] Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., et al. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82-115.
[3] Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K. R., & Samek, W. (2015). On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE, 10(7), e0130140.
[4] Biran, O., & Cotton, C. (2017). Explanation and justification in machine learning: A survey. In IJCAI 2017 Workshop on Explainable Artificial Intelligence (XAI) (Vol. 8, No. 1, pp. 8-13).
[5] Caruana, R., Lou, Y., Gehrke, J., Koch, P., Sturm, M., & Elhadad, N. (2015, August). Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1721-1730).
[6] Molnar, C. (2019). Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. Available at https://christophm.github.io/interpretable-ml-book/.
[7] Cobo, M. J., López‐Herrera, A. G., Herrera‐Viedma, E., & Herrera, F. (2012). SciMAT: A new science mapping analysis software tool. Journal of the American Society for Information Science and Technology, 63(8), 1609-1630.
[8] Defense Advanced Research Projects Agency [DARPA]. (2016). Explainable Artificial Intelligence (XAI) Program. Retrieved from: https://www.darpa.mil/attachments/DARPA-BAA-16-53.pdf
[9] Doran, D., Schulz, S., & Besold, T. R. (2017). What does explainable AI really mean? A new conceptualization of perspectives. arXiv preprint arXiv:1710.00794.
[10] Doshi-Velez, F., & Kim, B. (2017). Towards A Rigorous Science of Interpretable Machine Learning. arXiv Preprint arXiv:1702.08608v2.
[11] Friedman, J. H. (2001). Greedy function approximation: A gradient boosting machine. Annals of Statistics, 29(5), 1189-1232.
[12] Gall, R. (2018). Machine learning explainability vs interpretability: Two concepts that could help restore trust in AI. KDnuggets. https://www.kdnuggets.com/2018/12/machine-learning-explainability-interpretability-ai.html
[13] Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a “right to explanation”. AI magazine, 38(3), 50-57.
[14] He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).
[15] International Joint Conference on Artificial Intelligence [IJCAI]. (2017). IJCAI 2017 Workshop on Explainable Artificial Intelligence (XAI). Retrieved from: http://home.earthlink.net/~dwaha/research/meetings/ijcai17-xai
[16] Quinlan, J. R. (1987). Simplifying decision trees. International Journal of Man-Machine Studies, 27(3), 221-234.
[17] Khan, K. S., Kunz, R., Kleijnen, J., & Antes, G. (2003). Five steps to conducting a systematic review. Journal of the Royal Society of Medicine, 96(3), 118-121. https://doi.org/10.1258/jrsm.96.3.118
[18] Kobsa, A. (1984). What is explained by AI models?. Communication & Cognition.
[19] Krauskopf, E. (2018). A bibliometric analysis of the Journal of Infection and Public Health: 2008-2016. Journal of Infection and Public Health, 11(2), 224-229.
[20] Lipton, Z.C. (2016). The mythos of model interpretability. Workshop on Human Interpretability in Machine Learning.
[21] McDermott, D., Waldrop, M. M., Chandrasekaran, B., McDermott, J., & Schank, R. (1985). The Dark Ages of AI: A Panel Discussion at AAAI-84. AI Magazine, 6(3), 122.
[22] Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1-38.
[23] Peng, C. Y. J., Lee, K. L., & Ingersoll, G. M. (2002). An introduction to logistic regression analysis and reporting. The journal of educational research, 96(1), 3-14.
[24] Persson, O., Danell, R., & Schneider, J. W. (2009). How to use Bibexcel for various types of bibliometric analysis. Celebrating scholarly communication studies: A Festschrift for Olle Persson at his 60th Birthday, 5, 9-24.
[25] Pieters, W. (2011). Explanation and trust: what to tell the user in security and AI?. Ethics and information technology, 13(1), 53-64.
[26] Pritchard, A. (1969). Statistical bibliography or bibliometrics. Journal of documentation, 25(4), 348-349.
[27] Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135-1144.
[28] Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206-215.
[29] Samek, W., Wiegand, T., & Müller, K. R. (2017). Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. arXiv preprint arXiv:1708.08296.
[30] Schildt, H. A. (2002). Sitkis: software for bibliometric data management and analysis. Helsinki: Institute of Strategy and International Business, 6, 1.
[31] Schuchmann, S. (2019). Analyzing the Prospect of an Approaching AI Winter. doi:10.13140/RG.2.2.10932.91524.
[32] Skirpan, M., & Yeh, T. (2017). Designing a moral compass for the future of computer vision using speculative analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 64-73.
[33] Stumpf, S., Rajaram, V., Li, L., Wong, W. K., Burnett, M., Dietterich, T., et al. (2009). Interacting meaningfully with machine learning systems: Three experiments. International Journal of Human-Computer Studies, 67(8), 639-662.
[34] Van Eck, N. J., & Waltman, L. (2010). Software survey: VOSviewer, a computer program for bibliometric mapping. Scientometrics, 84(2), 523-538.
[35] Van Eck, N.J., & Waltman, L. (2009). How to normalize cooccurrence data? An analysis of some well-known similarity measures. Journal of the American Society for Information Science and Technology, 60(8), 1635-1651
[36] Van Eck, N.J., & Waltman, L. (2020). VOSviewer Manual. Retrieved from: https://www.vosviewer.com/download/f-33t2.pdf
[37] Wang, Q. (2018). Distribution features and intellectual structures of digital humanities. Journal of Documentation.
[38] Xu, Z., & Yu, D. (2019). A Bibliometrics analysis on big data research (2009-2018). Journal of Data, Information and Management, 1(1), 3-15.
[39] Zupic, I., & Čater, T. (2015). Bibliometric methods in management and organization. Organizational Research Methods, 18(3), 429-472.
Description Master's thesis
National Chengchi University
Department of Management Information Systems
107356032
Source http://thesis.lib.nccu.edu.tw/record/#G0107356032
Type thesis
URI http://nccur.lib.nccu.edu.tw/handle/140.119/134023
Table of Contents Chapter 1 Introduction
Section 1 Research Background
Section 2 Research Motivation
Section 3 Research Objectives
Section 4 Research Process
Chapter 2 Literature Review
Section 1 Explainable Artificial Intelligence
Section 2 The Need for Explainability
Section 3 Bibliometrics
Chapter 3 Research Method
Section 1 Research Tools Adopted
Section 2 Selection of Research Items
Section 3 Selection of Analysis Methods
Section 4 Literature Data and Search Strategy
Chapter 4 Research Framework and Bibliometric Analysis Results
Section 1 Research Topics and Trends in Explainability
Section 2 Explainability Techniques and Their Evaluation
Section 3 Existing Problems and Development Limitations of Explainability
Chapter 5 Conclusions and Suggestions
Section 1 Research Conclusions
Section 2 Research Contributions
Section 3 Research Limitations
Section 4 Future Directions
References
Appendix: Bibliometric Analysis Data
Format application/pdf, 6,626,837 bytes
DOI 10.6814/NCCU202100316