Title: 機器學習可解釋技術在商業智慧中對使用者信任之影響
The Effect of Explanation on User Trust in Business Intelligence
Author: Hou, Liang-Yu (侯亮宇)
Advisor: Lin, Yi-Ling (林怡伶)
Keywords: Human computer interaction; machine learning; information visualization; explainable artificial intelligence (XAI); trust
Date: 2021
Uploaded: 2-Sep-2021 15:58:15 (UTC+8)
Abstract: In recent years, machine learning has sparked a new wave of artificial intelligence (AI) applications, and AI is being applied to increasingly complex tasks and domains. However, most AI models operate as black boxes, making it difficult for people to understand how the machine works and how it reaches its decisions. Current work on explainable artificial intelligence (XAI) focuses largely on explaining the underlying algorithms, particularly the results of image recognition, and end-user-oriented XAI applications concentrate mainly on supporting human decision-making in healthcare; little research has investigated how AI applications in the commercial domain can be integrated with explanation techniques. This study proposes a general, end-user-centric explanation framework for applying AI in real business settings. Built on business intelligence (BI), the framework provides end users with complete explanations at the different stages of machine learning. To put the framework into practice, we applied it to an airline baggage-weight prediction case. Finally, to measure the framework's effectiveness in practice, we conducted an experiment on Amazon Mechanical Turk. Our results show that participants who used the explanation framework were more confident in the model's predictions, trusted the system more, and were more willing to adopt the recommendations it provided. Our research enables companies to extend their business intelligence and combine the different stages of this explanation framework to improve the transparency and reliability of machine learning in business applications.
Description: Master's thesis
National Chengchi University (國立政治大學)
Department of Management Information Systems
108356028
Source: http://thesis.lib.nccu.edu.tw/record/#G0108356028
Type: thesis
URI: http://nccur.lib.nccu.edu.tw/handle/140.119/136850
Table of Contents:
CHAPTER 1 INTRODUCTION
1-1 BACKGROUND AND MOTIVATION
1-2 RESEARCH QUESTION
CHAPTER 2 LITERATURE REVIEW
2-1 BUSINESS INTELLIGENCE
2-1-1 The Definition of Business Intelligence
2-1-2 The Application of Business Intelligence
2-1-3 The Tool in Business Intelligence
2-1-4 The Challenge of Business Intelligence
2-2 EXPLAINABLE ARTIFICIAL INTELLIGENCE (XAI)
2-2-1 The Reasons of XAI
2-2-2 The Application of XAI
2-2-3 The Challenge of XAI
2-3 TRUST
2-3-1 Trust in Computer Sciences
2-3-2 Measuring Human Computer Trust
CHAPTER 3 RESEARCH METHODOLOGY
3-1 THEORETICAL BACKGROUND
3-2 FRAMEWORK DEVELOPMENT
3-3 FRAMEWORK EVALUATION
CHAPTER 4 CASE STUDY
4-1 BUSINESS QUESTION
4-2 RELATED WORK
4-3 DATASET
4-4 DATA PREPROCESSING
4-5 MODEL SELECTION AND TRAINING
4-6 EXPLANATION FRAMEWORK IMPLEMENTATION
CHAPTER 5 EXPERIMENT
5-1 TASK AND MATERIAL
5-2 PARTICIPANT AND EXPERIMENT PROCEDURE
5-3 MEASUREMENT
CHAPTER 6 EXPERIMENT RESULT
CHAPTER 7 DISCUSSION
7-1 GENERAL DISCUSSION
7-2 LIMITATION AND FUTURE WORK
CHAPTER 8 CONCLUSION
REFERENCE
Format: application/pdf (2790463 bytes)
dc.relation.reference (參考文獻) Adadi, A., & Berrada, M. (2018). Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access.
Alber, M., Lapuschkin, S., Seegerer, P., Hägele, M., Schütt, K. T., Montavon, G., Samek, W., Müller, K. R., Dähne, S., & Kindermans, P. J. (2019). INNvestigate neural networks! Journal of Machine Learning Research.
Allen, W. L. (2018). Visual brokerage: Communicating data and research through visualisation. Public Understanding of Science, 27(8), 906–922.
Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., Suh, J., Iqbal, S., Bennett, P. N., Inkpen, K., Teevan, J., Kikin-Gil, R., & Horvitz, E. (2019). Guidelines for human-AI interaction. Conference on Human Factors in Computing Systems - Proceedings.
Apley, D. W., & Zhu, J. (2020). Visualizing the effects of predictor variables in black box supervised learning models. Journal of the Royal Statistical Society. Series B: Statistical Methodology.
Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K. R., & Samek, W. (2015). On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE.
Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020a). Explainable Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115.
Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020b). Explainable Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information
Fusion.
Bien, J., & Tibshirani, R. (2011). Prototype selection for interpretable classification. Annals of Applied Statistics.
Borkin, M. A., Vo, A. A., Bylinskii, Z., Isola, P., Sunkavalli, S., Oliva, A., & Pfister, H. (2013). What makes a visualization memorable. IEEE Transactions on
Visualization and Computer Graphics.
Bruls, M., Huizing, K., & van Wijk, J. J. (2000). Squarified Treemaps.
Burton, B., Geishecker, L., Schlegel, K., Hostmann, B., Austin, T., Herschel, G., Rayner, N., Sallam, R. L., Richardson, J., Hagerty, J., & Hostmann, B. (2006). Magic Quadrant for Business Intelligence Platforms WHAT YOU NEED TO KNOW. Gartner Research, January, 1–5. http://www.gartner.com/technology/about/ombudsman/omb_guide2.jsp
Cai, C. J., Jongejan, J., & Holbrook, J. (2019). The effects of example-based explanations in a machine learning interface. International Conference on Intelligent User Interfaces, Proceedings IUI, Part F1476, 258–262.
Carvalho, D. V., Pereira, E. M., & Cardoso, J. S. (2019). Machine learning interpretability: A survey on methods and metrics. In Electronics (Switzerland).
Cawthon, N., & Moere, A. Vande. (2007). The effect of aesthetic on the usability of data visualization. Proceedings of the International Conference on Information Visualisation.
Chati, Y. S., & Balakrishnan, H. (2017). A Gaussian Process Regression approach to model aircraft engine fuel flow rate. Proceedings - 2017 ACM/IEEE 8th International Conference on Cyber-Physical Systems, ICCPS 2017 (Part of CPS Week).
Chaudhuri, S., Dayal, U., & Narasayya, V. (2011). An overview of business intelligence technology. In Communications of the ACM (Vol. 54, Issue 8).
Chen, H., Chiang, R. H. L., & Storey, V. C. (2012). Business intelligence and analytics: From big data to big impact. MIS Quarterly: Management Information Systems,
36(4).
Clancey, W. J. (1983). The epistemology of a rule-based expert system -a framework for explanation. Artificial Intelligence.
Collins, C. R., & Stephenson, K. (2003). A circle packing algorithm. Computational Geometry: Theory and Applications, 25(3).
Davis, B., Glenski, M., Sealy, W., & Arendt, D. (2020). Measure Utility, Gain Trust: Practical Advice for XAI Researchers. Proceedings - 2020 IEEE Workshop on
TRust and EXpertise in Visual Analytics, TREX 2020, 1–8.
Dawes, R. M. (1979). The robust beauty of improper linear models in decision making. American Psychologist, 34(7).
Deng, X., & Chi, L. (2012). Understanding postadoptive behaviors in information systems use: A longitudinal analysis of system use problems in the business
intelligence context. Journal of Management Information Systems, 29(3).
Desai, M., Kaniarasu, P., Medvedev, M., Steinfeld, A., & Yanco, H. (2013). Impact of robot failures and feedback on real-time trust. ACM/IEEE International
Conference on Human-Robot Interaction.
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental
Psychology: General, 144(1).
Doshi-Velez, F., & Kim, B. (2017). Towards A Rigorous Science of Interpretable Machine Learning. Ml, 1–13. http://arxiv.org/abs/1702.08608
Dresner, H. (2001). Business Intelligence in 2002: A Coming of Age - 103282.pdf. Gartner.
Freedy, A., DeVisser, E., Weltman, G., & Coeyman, N. (2007). Measurement of trust in human-robot collaboration. Proceedings of the 2007 International Symposium on Collaborative Technologies and Systems, CTS.
Friedman, J. H. (2001). Greedy function approximation: A gradient boosting machine. Annals of Statistics.
Friedman, J. H., & Meulman, J. J. (2003). Multiple additive regression trees with application in epidemiology. Statistics in Medicine, 22(9).
Gillespie, PhD, MA, RN, T. W. (2012). Understanding Waterfall Plots. Journal of the Advanced Practitioner in Oncology, 3(2).
Glass, A., McGuinness, D. L., & Wolverton, M. (2008). Toward establishing trust in adaptive agents. International Conference on Intelligent User Interfaces, Proceedings IUI.
Goodman, B., & Flaxman, S. (2017). European union regulations on algorithmic decision making and a “right to explanation.” AI Magazine.
Gorchels, L. (2000). The Product Manager’s Handbook. In NTC Business Books. Groom, V., & Nass, C. (2007). Can robots be teammates? Benchmarks in human-robot
teams. Interaction Studies.
Henelius, A., Puolamäki, K., Boström, H., Asker, L., & Papapetrou, P. (2014). A peek into the black box: Exploring classifiers by randomization. Data Mining and
Knowledge Discovery.
Hoffrage, U., & Gigerenzer, G. (1998). Using natural frequencies to improve diagnostic inferences. Academic Medicine.
Hong, S., & Zhang, A. (2010). An efficiency study of airlines and air cargo/passenger divisions: A DEA approach. World Review of Intermodal Transportation
Research.
Inselberg, A., & Dimsdale, B. (1990). Parallel coordinates: A tool for visualizing multi-dimensional geometry.
Itti, L., Koch, C., & Niebur, E. (1998). A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine
Intelligence, 20(11).
Jiang, C., & Zheng, S. (2020). Airline baggage fees and airport congestion. Transportation Research Part C: Emerging Technologies.
Kay, M., Patel, S. N., & Kientz, J. A. (2015). How good is 85%? A survey tool to connect classifier evaluation to acceptability of accuracy. Conference on Human Factors in Computing Systems - Proceedings, 2015-April.
Kim, Y. S., Walls, L. A., Krafft, P., & Hullman, J. (2019). A Bayesian cognition approach to improve data visualization. Conference on Human Factors in Computing Systems - Proceedings, 1–14.
Koh, P. W., & Liang, P. (2017). Understanding black-box predictions via influence functions. 34th International Conference on Machine Learning, ICML 2017, 4, 2976–2987.
Kosara, R. (2016). Presentation-Oriented Visualization Techniques. IEEE Computer Graphics and Applications, 36(1).
Krause, J., Perer, A., & Ng, K. (2016). Interacting with predictions: Visual inspection of black-box machine learning models. Conference on Human Factors in Computing Systems - Proceedings, 5686–5697.
Langley, P., & Simon, H. A. (1995). Applications of Machine Learning and Rule Induction. Communications of the ACM.
Lapuschkin, S., Binder, A., Montavon, G., Muller, K. R., & Samek, W. (2016). Analyzing Classifiers: Fisher Vectors and Deep Neural Networks. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition.
Le Bras, P., Robb, D. A., Methven, T. S., Padilla, S., & Chantler, M. J. (2018). Improving user confidence in concept maps: Exploring data driven explanations. Conference on Human Factors in Computing Systems - Proceedings, 2018-April.
LeBaron, B. (2001). Evolution and time horizons in an agent-based stock market. Macroeconomic Dynamics.
Liao, Q. V., Gruen, D., & Miller, S. (2020). Questioning the AI: Informing Design Practices for Explainable AI User Experiences. Conference on Human Factors in Computing Systems - Proceedings.
Lipton, Z. C. (2018). The mythos of model interpretability. Communications of the ACM, 61(10).
Lou, Y., Caruana, R., Gehrke, J., & Hooker, G. (2013). Accurate intelligible models with pairwise interactions. Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
Lundberg, S. M., & Lee, S. I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems.
Madsen, M., & Gregor, S. (2000). Measuring Human-Computer Trust. Proceedings of the Eleventh Australasian Conference on Information Systems, 6–8.
Manikandan, S. (2011). Measures of central tendency: Median and mode. In Journal of Pharmacology and Pharmacotherapeutics (Vol. 2, Issue 3).
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An Integrative Model of Organizational Trust. Academy of Management Review, 20(3), 709–734.
McAllister, D. J. (1995). Affect- and Cognition-Based Trust as Foundations for Interpersonal Cooperation in Organizations. Academy of Management Journal, 38(1).
McGovern, A., Lagerquist, R., Gagne, D. J., Jergensen, G. E., Elmore, K. L., Homeyer, C. R., & Smith, T. (2019). Making the black box more transparent: Understanding the physical implications of machine learning. Bulletin of the American Meteorological Society.
Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. In Artificial Intelligence.
Molnar, C. (2019). Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. https://christophm.github.io/interpretable-ml-book
Nguyen, A., Yosinski, J., & Clune, J. (2015). Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition.
Nicolae, M., Arikan, M., Deshpande, V., & Ferguson, M. (2017). Do bags fly free? An empirical analysis of the operational implications of airline baggage fees. In Management Science.
Nunes, I., & Jannach, D. (2017). A systematic review and taxonomy of explanations in decision support and recommender systems. User Modeling and User-Adapted Interaction, 27(3–5), 393–444.
Pandey, A. V., Manivannan, A., Nov, O., Satterthwaite, M., & Bertini, E. (2014). The persuasive power of data visualization. IEEE Transactions on Visualization and Computer Graphics, 20(12), 2211–2220.
Panniello, U., Gorgoglione, M., & Tuzhilin, A. (2016). In CARSs we trust: How context-aware recommendations affect customers’ trust and other business performance measures of recommender systems. Information Systems Research, 27(1).
Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors.
Perrotta, F., Parry, T., & Neves, L. C. (2017). Application of machine learning for fuel consumption modelling of trucks. Proceedings - 2017 IEEE International Conference on Big Data, Big Data 2017.
Pieters, W. (2011). Explanation and trust: What to tell the user in security and AI? Ethics and Information Technology, 13(1).
Pizer, S. M., Amburn, E. P., Austin, J. D., Cromartie, R., Geselowitz, A., Greer, T., ter Haar Romeny, B., Zimmerman, J. B., & Zuiderveld, K. (1987). Adaptive histogram equalization and its variations. Computer Vision, Graphics, and Image Processing, 38(1), 99.
Poursabzi-Sangdeh, F., Goldstein, D. G., & Hofman, J. M. (2021). Manipulating and measuring model interpretability. In Conference on Human Factors in Computing Systems - Proceedings.
Power, D. J. (2002). Decision Support Systems: Concepts and Resources for Managers. In Information Systems Management (Vol. 20, Issue 4).
Putnam, V., & Conati, C. (2019). Exploring the need for explainable artificial intelligence (XAI) in intelligent tutoring systems (ITS). CEUR Workshop Proceedings.
Quinlan, J. R. (1987). Simplifying decision trees. International Journal of Man-Machine Studies.
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why should i trust you?” Explaining the predictions of any classifier. Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144.
Robertson, G., Fernandez, R., Fisher, D., Lee, B., & Stasko, J. (2008). Effectiveness of animation in trend visualization. IEEE Transactions on Visualization and Computer Graphics, 14(6), 1325–1332.
Rose, J. M., Hensher, D. A., Greene, W. H., & Washington, S. P. (2012). Attribute exclusion strategies in airline choice: Accounting for exogenous information on decision maker processing strategies in models of discrete choice. Transportmetrica.
Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215.
Samek, W., Wiegand, T., & Müller, K.-R. (2017). Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models. http://arxiv.org/abs/1708.08296
Sautto, J. M. (2014). Decision Support Systems for Business Intelligence, 2nd edition. In Investigación Operacional (Vol. 35, Issue 1).
Saxena, R., & Srinivasan, A. (2013). Business intelligence. In International Series in Operations Research and Management Science.
Shafiei, F., & Sundaram, D. (2004). Multi-enterprise collaborative enterprise resource planning and decision support systems. Proceedings of the Hawaii International Conference on System Sciences, 37.
Simonyan, K., Vedaldi, A., & Zisserman, A. (2014). Deep inside convolutional networks: Visualising image classification models and saliency maps. 2nd International Conference on Learning Representations, ICLR 2014 - Workshop Track Proceedings.
Swartout, W. R. (1983). XPLAIN: a system for creating and explaining expert consulting programs. Artificial Intelligence.
Touchette, P. E., MacDonald, R. F., & Langer, S. N. (1985). A scatter plot for identifying stimulus control of problem behavior. Journal of Applied Behavior Analysis, 18(4).
Trani, A. A., Wing-Ho, F. C., Schilling, G., Baik, H., & Seshadri, A. (2004). A neural network model to estimate aircraft fuel consumption. Collection of Technical Papers - AIAA 4th Aviation Technology, Integration, and Operations Forum, ATIO.
van Wijk, J. J., & van de Wetering, H. (1999). Cushion treemaps: visualization of hierarchical information. Proceedings of the IEEE Symposium on Information Visualization.
Vassiliades, A., Bassiliades, N., & Patkos, T. (2021). Argumentation and explainable artificial intelligence: A survey. In Knowledge Engineering Review.
Veale, M., Van Kleek, M., & Binns, R. (2018). Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making. Conference on Human Factors in Computing Systems - Proceedings, 2018-April.
Vilone, G., & Longo, L. (2020). Explainable Artificial Intelligence: a Systematic Review. http://arxiv.org/abs/2006.00093
Wang, D., Yang, Q., Abdul, A., & Lim, B. Y. (2019). Designing Theory-Driven User-Centric Explainable AI. Conference on Human Factors in Computing Systems - Proceedings, 1–15.
Wang, J., Gou, L., Yang, H., & Shen, H. W. (2018). GANViz: A Visual Analytics Approach to Understand the Adversarial Game. IEEE Transactions on Visualization and Computer Graphics, 24(6).
Wang, N., Pynadath, D. V., & Hill, S. G. (2016). Trust calibration within a human-robot team: Comparing automatically generated explanations. ACM/IEEE International Conference on Human-Robot Interaction.
Wong, W. H., Zhang, A., Van Hui, Y., & Leung, L. C. (2009). Optimal baggage-limit policy: Airline passenger and cargo allocation. Transportation Science, 43(3), 355–369.
Xia, M., Asano, Y., Williams, J. J., Qu, H., & Ma, X. (2020). Using Information Visualization to Promote Students’ Reflection on “gaming the System” in Online Learning. L@S 2020 - Proceedings of the 7th ACM Conference on Learning @ Scale, 37–49.
Yagoda, R. E., & Gillan, D. J. (2012). You Want Me to Trust a ROBOT? The Development of a Human-Robot Interaction Trust Scale. International Journal of Social Robotics, 4(3).
Yang, F., Huang, Z., Scholtz, J., & Arendt, D. L. (2020). How do visual explanations foster end users’ appropriate trust in machine learning? International Conference on Intelligent User Interfaces, Proceedings IUI.
Yu, K., Taib, R., Berkovsky, S., Zhou, J., Conway, D., & Chen, F. (2016). Trust and Reliance based on system accuracy. UMAP 2016 - Proceedings of the 2016 Conference on User Modeling Adaptation and Personalization.
Gorodov, E. Y., & Gubarev, V. V. (2013). Analytical Review of Data Visualization Methods in Application to Big Data. Journal of Electrical and Computer Engineering, 2013, Article ID 969458.
Zhang, Y., Vera Liao, Q., & Bellamy, R. K. E. (2020). Effect of confidence and explanation on accuracy and trust calibration in AI-assisted decision making.
FAT* 2020 - Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency.
DOI: 10.6814/NCCU202101328