Academic Output - Theses

Title: XAI for Trust: Analyzing the Impact of Explanation Methods under Technostress
Author: Zheng, An-You
Contributors: Chou, Chih-Yuan; Zheng, An-You
Keywords: Artificial Intelligence; Explainability; Technostress; Distraction-Conflict Theory; Cognitive Load; Dominant Response
Date: 2024
Uploaded: 1-Feb-2024 10:56:44 (UTC+8)
Abstract: With the continuous development of information and communication technologies (ICTs), many users inadvertently experience mounting pressure, often referred to as "technostress". This stress, arising from interactions with technology, commonly manifests in negative effects such as decreased productivity, fatigue, and anxiety. Concurrently, artificial intelligence (AI), as a rapidly evolving technology, presents a series of unique challenges. Despite the growing use of AI in organizational decision-making, the opacity of AI decision mechanisms, often described as "black-box" decisions, undermines user trust, and the potential risks of these AI-generated decisions remain uncertain. In light of this, AI explainability has become increasingly important as a means of clarifying AI decision processes and enhancing user trust. However, whether the trust that end users gain from explainable AI (XAI) is diminished by perceived technostress remains an open research question. Through the lens of distraction-conflict theory and cognitive load, this study therefore examines which types of XAI explanation methods yield trust that is weakened by end users' existing technostress. Using experiments and statistical analysis, it tests the relationship between different explanation methods and trust, as well as the moderating role of technostress in that relationship. Based on the results, we found that technostress does not moderate end users' trust in AI explanations; however, the presence of technostress itself is significantly and negatively related to trust in AI. We hope this research provides inspiration and reference for future studies and practical applications.
參考文獻 Abdul, A., Vermeulen, J., Wang, D., Lim, B. Y., & Kankanhalli, M. S. (2018). Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda. In Proceeding of the 2018 CHI Conference on Human Factors in Computing Systems, 1–18. https://doi.org/10.1145/3173574.3174156 Abdul-Gader, A. H., & Kozar, K. A. (1995). The Impact of Computer Alienation on Information Technology Investment Decisions: An Exploratory Cross-National Analysis. Management Information Systems Quarterly, 19(4), 535–559. https://doi.org/10.2307/249632 Åborg, C., & Billing, A. (2003). Health Effects of ‘The Paperless Office’ – Evaluations of the Introduction of Electronic Document Handling Systems. Behaviour & Information Technology, 22(6), 389–396. https://doi.org/10.1080/01449290310001624338 Adadi, A., & Berrada, M. (2018). Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access, 6, 52138–52160. https://doi.org/10.1109/ACCESS.2018.2870052 Aggarwal, A., Lohia, P., Nagar, S., Dey, K., & Saha, D. (2019). Black Box Fairness Testing of Machine Learning Models. In Proceeding of the 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 625–635. https://doi.org/10.1145/3338906.3338937 Alicioglu, G., & Sun, B. (2021). A Survey of Visual Analytics for Explainable Artificial Intelligence Methods. Computers & Graphics, 102, 502–520. https://doi.org/10.1016/j.cag.2021.09.002 Alloway, T. P. (2006). How Does Working Memory Work in the Classroom? Educational Research and Reviews, 1(4), 134–139. Amat, B. (2021). Analysis of the Impact of Artificial Intelligence on Our Lives. The Frontiers of Society, Science and Technology, 3(7), 45–50. https://doi.org/10.25236/FSST.2021.030709 Araujo, T., Helberger, N., Kruikemeier, S., & de Vreese, C. H. (2020). In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI & Society, 35(3), 611–623. 
https://doi.org/10.1007/s00146-019-00931-w Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., García, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., Francisco Herrera, Herrera, F., & Herrera, F. S. (2020). Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI. Information Fusion, 58, 82–115. https://doi.org/10.1016/j.inffus.2019.12.012 Arya, V., Bellamy, R. K., Chen, P.-Y., Dhurandhar, A., Hind, M., Hoffman, S. C., Houde, S., Liao, Q. V., Luss, R., Mojsilović, A., Mourad, S., Pedemonte, P., Raghavendra, R., Richards, J., Sattigeri, P., Shanmugam, K., Singh, M., Varshney, K. R., Wei, D., & Zhang, Y. (2019). One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques. arXiv: Artificial Intelligence. https://doi.org/10.48550/arXiv.1909.03012 Ayyagari, R., Grover, V., & Purvis, R. (2011). Technostress: Technological Antecedents and Implications. Management Information Systems Quarterly, 35(4), 831–858. https://doi.org/10.2307/41409963 Baddeley, A. (1992). Working Memory: The Interface between Memory and Cognition. Journal of Cognitive Neuroscience, 4(3), 281–288. Bannert, M. (2002). Managing Cognitive Load—Recent Trends in Cognitive Load Theory. Learning and Instruction, 12(1), 139–146. Baron, R. S. (1986). Distraction-Conflict Theory: Progress and Problems. Advances in Experimental Social Psychology, 19, 1–40. https://doi.org/10.1016/S0065-2601(08)60211-7 Baumeister, R. F., & Bushman, B. J. (2020). Social psychology and human nature. Cengage Learning. Becker, J. T., & Morris, R. G. (1999). Working Memory(s). Brain and Cognition, 41(1), 1–8. Benitez, J. M., Castro, J. L., & Requena, I. (1997). Are Artificial Neural Networks Black Boxes? IEEE Transactions on Neural Networks, 8(5), 1156–1164. https://doi.org/10.1109/72.623216 Benjamin, D. J., Brown, S. A., & Shapiro, J. M. (2013). Who is ‘Behavioral’? 
Cognitive Ability and Anomalous Preferences. Journal of the European Economic Association, 11(6), 1231–1255. https://doi.org/10.2139/ssrn.675264 Bohner, G., Ruder, M., Erb, H.-P., & Erb, H.-P. (2002). When Expertise Backfires: Contrast and Assimilation Effects in Persuasion. British Journal of Social Psychology, 41(4), 495–519. https://doi.org/10.1348/014466602321149858 Bologna, G., & Hayashi, Y. (2017). Characterization of Symbolic Rules Embedded in Deep DIMLP Networks: A Challenge to Transparency of Deep Learning. Journal of Artificial Intelligence and Soft Computing Research, 7(4), 265–286. https://doi.org/10.1515/jaiscr-2017-0019 Brillhart, P. E. (2004). Technostress in the Workplace: Managing Stress in the Electronic Workplace. Journal of American Academy of Business, 5(1/2), 302–307. Brod, C. (1982). Managing Technostress: Optimizing the Use of Computer Technology. The Personnel Journal, 61(10), 753–757. Brod, C. (1984). Technostress: The human cost of the computer revolution. Addison-Wesley. Brosnan, M. (1998). Technophobia: The psychological impact of information technology. Routledge. Brown, R., Miller, G. A., Galanter, E., & Pribram, K. H. (1960). Plans and the Structure of Behavior. Language, 36(4), 527–532. https://doi.org/10.2307/411065 Cahour, B., & Forzy, J.-F. (2009). Does Projection Into Use Improve Trust and Exploration? An Example With a Cruise Control System. Safety Science, 47(9), 1260–1270. https://doi.org/10.1016/j.ssci.2009.03.015 Carabantes, M. (2020). Black-box Artificial Intelligence: An Epistemological and Critical Analysis. AI & SOCIETY, 35(2), 309–317. https://doi.org/10.1007/s00146-019-00888-w Castets-Renard, C. (2019). Accountability of Algorithms in the GDPR and beyond: A European Legal Framework on Automated Decision-Making. Fordham Intellectual Property, Media & Entertainment Law Journal, 30(1), 91–138. https://doi.org/10.2139/ssrn.3391266 Chandler, P., & Sweller, J. (1996). Cognitive Load While Learning to Use a Computer Program. 
Applied Cognitive Psychology, 10(2), 151–170. Chen, F., & National Information Communication Technology Australia Ltd Eveleigh. (2011). Effects of cognitive load on trust. Nat. ICT Australia Limited. Chico, V. (2018). The Impact of the General Data Protection Regulation on Health Research. British Medical Bulletin, 128(1), 109–118. https://doi.org/10.1093/bmb/ldy038 Colavita, F. B., & Weisberg, D. (1979). A Further Investigation of Visual Dominance. Perception & Psychophysics, 25(4), 345–347. Compeau, D., Higgins, C. A., & Huff, S. L. (1999). Social Cognitive Theory and Individual Reactions to Computing Technology: A Longitudinal Study. Management Information Systems Quarterly, 23(2), 145–158. https://doi.org/10.2307/249749 Confalonieri, R., Coba, L., Wagner, B., & Besold, T. R. (2021). A Historical Perspective of Explainable Artificial Intelligence. WIREs Data Mining and Knowledge Discovery, 11(1), e1391. https://doi.org/10.1002/widm.1391 Conway, A. R. A., Cowan, N., & Bunting, M. F. (2001). The Cocktail Party Phenomenon Revisited: The Importance of Working Memory Capacity. Psychonomic Bulletin & Review, 8(2), 331–335. https://doi.org/10.3758/BF03196169 Cooper, C. L., Dewe, P., & O’Driscoll, M. P. (2001). Organizational stress: A review and critique of theory, research, and applications. Sage. Cooper, G. (1990). Cognitive Load Theory as an Aid for Instructional Design. Australasian Journal of Educational Technology, 6(2), 108–113. Cowan, N. (2008). What Are the Differences Between Long-Term, Short-Term, and Working Memory? Progress in Brain Research, 169, 323–338. https://doi.org/10.1016/s0079-6123(07)00020-9 Davenport, T. H., & Harris, J. G. (2005). Automated Decision Making Comes of Age. MIT Sloan Management Review, 46(4), 83–89. de Sousa, W. G., de Melo, E. R. P., de Souza Bermejo, P. H., Farias, R. A. S., & de Oliveira Gomes, A. (2019). How and Where is Artificial Intelligence in the Public Sector Going? A Literature Review and Research Agenda. 
Government Information Quarterly, 36(4), 101392. https://doi.org/10.1016/j.giq.2019.07.004 DeCamp, M., & Tilburt, J. C. (2019). Why We Cannot Trust Artificial Intelligence in Medicine. The Lancet Digital Health, 1(8), e390. https://doi.org/10.1016/S2589-7500(19)30197-9 DeMaagd, G. R. (1983). Management Information Systems. Management Accounting, 65(4), 10–71. Demajo, L. M., Vella, V., & Dingli, A. (2020). Explainable AI for Interpretable Credit Scoring. Computer Science & Information Technology (CS & IT), 185–203. https://doi.org/10.5121/csit.2020.101516 D’Esposito, M. (2007). From Cognitive to Neural Models of Working Memory. Philosophical Transactions of the Royal Society B: Biological Sciences, 362(1481), 761–772. Dobrescu, E. M., & Dobrescu, E. M. (2018). Artificial Intelligence (AI)—The Technology That Shapes The World. 6(2), 71–81. Dong, Q., & Howard, T. (2006). Emotional Intelligence, Trust and Job Satisfaction. Competition Forum, 4(2), 381–388. Druce, J., Harradon, M., & Tittle, J. (2021). Explainable Artificial Intelligence (XAI) for Increasing User Trust in Deep Reinforcement Learning Driven Autonomous Systems (arXiv preprint arXiv:2106.03775). https://doi.org/10.48550/ARXIV.2106.03775 Duell, J. A. (2021). A Comparative Approach to Explainable Artificial Intelligence Methods in Application to High-Dimensional Electronic Health Records: Examining the Usability of XAI (arXiv preprint arXiv:2103.04951). https://doi.org/10.48550/ARXIV.2103.04951 Duffy, S., & Smith, J. (2014). Cognitive Load in the Multi-Player Prisoner’s Dilemma Game: Are There Brains in Games? Journal of Behavioral and Experimental Economics, 51, 47–56. https://doi.org/10.2139/ssrn.1841523 Dzindolet, M. T., Peterson, S. A., Pomranky, R. A., Pierce, L. G., & Beck, H. P. (2003). The Role of Trust in Automation Reliance. International Journal of Human-Computer Studies, 58(6), 697–718. https://doi.org/10.1016/S1071-5819(03)00038-7 Elfering, A., Grebner, S., K. 
Semmer, N., Kaiser‐Freiburghaus, D., Lauper‐Del Ponte, S., & Witschi, I. (2005). Chronic Job Stressors and Job Control: Effects on Event‐Related Coping Success and Well‐being. Journal of Occupational and Organizational Psychology, 78(2), 237–252. https://doi.org/10.1348/096317905X40088 Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On Seeing Human: A Three-factor Theory of Anthropomorphism. Psychological Review, 114(4), 864–886. https://doi.org/10.1037/0033-295X.114.4.864 Ericsson, K. A., Ericsson, K. A., & Kintsch, W. (1995). Long-term Working Memory. Psychological Review, 102(2), 211–245. https://doi.org/10.1037/0033-295x.102.2.211 Esteva, A., Kuprel, B., Novoa, R. A., Ko, J. M., Swetter, S. M., Blau, H. M., & Thrun, S. (2017). Dermatologist-Level Classification of Skin Cancer With Deep Neural Networks. Nature, 542(7639), 115–118. https://doi.org/10.1038/nature21056 Eysenck, M. W. (1992). Anxiety: The Cognitive Perspective. Psychology Press. Fahrenkamp-Uppenbrink, J. (2019). What Good is a Black Box? Science, 364(6435), 38.16-40. Främling, K. (2020). Explainable AI without Interpretable Model (arXiv preprint arXiv:2009.13996). https://doi.org/10.48550/ARXIV.2009.13996 Gefen, D., Straub, D., & Boudreau, M.-C. (2000). Structural Equation Modeling and Regression: Guidelines for Research Practice. Communications of the Association for Information Systems, 4. https://doi.org/10.17705/1CAIS.00407 Ghosh, S., & Singh, A. (2020). The Scope of Artificial Intelligence in Mankind: A Detailed Review. Journal of Physics: Conference Series, 1531, 012045. https://doi.org/10.1088/1742-6596/1531/1/012045 Gilbert, D. T. (1989). Thinking lightly about others: Automatic components of the social inference process. In Unintended thought (pp. 189–211). The Guilford Press. Gilbert, D. T., Pelham, B. W., & Krull, D. S. (1988). On Cognitive Busyness: When Person Perceivers Meet Persons Perceived. Journal of Personality and Social Psychology, 54(5), 733–740. 
https://doi.org/10.1037/0022-3514.54.5.733 Ginns, P., & Leppink, J. (2019). Special Issue on Cognitive Load Theory: Editorial. Educational Psychology Review, 31(2), 255–259. Goldman, C. S., & Wong, E. H. (1997). Stress and the College Student. Education 3-13, 117(4), 604–611. Goodman, B., & Flaxman, S. (2017). European Union Regulations on Algorithmic Decision-Making and a “Right to Explanation”. AI Magazine, 38(3), 50–57. https://doi.org/10.1609/aimag.v38i3.2741 Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S., & Yang, G.-Z. (2019). XAI—Explainable Artificial Intelligence. Science Robotics, 4(37), eaay7120. Gupta, K., Hajika, R., Pai, Y. S., Duenser, A., Lochner, M., & Billinghurst, M. (2019). In AI We Trust: Investigating the Relationship between Biosignals, Trust and Cognitive Load in VR. Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology, 1–10. https://doi.org/10.1145/3359996.3364276 Gyasi, E. A., Handroos, H., & Kah, P. (2019). Survey on Artificial Intelligence (AI) Applied in Welding: A Future Scenario of the Influence of AI on Technological, Economic, Educational and Social Changes. Procedia Manufacturing, 38, 702–714. https://doi.org/10.1016/j.promfg.2020.01.095 Haldorai, A., Murugan, S., Umamaheswari, K., & Ramu, A. (2020). Evolution, Challenges, and Application of Intelligent ICT Education: An Overview. Computer Applications in Engineering Education, 29(3), 562–571. https://doi.org/10.1002/cae.22217 Haleem, A., Javaid, M., & Khan, I. H. (2019). Current Status and Applications of Artificial Intelligence (AI) in Medical Field: An Overview. Current Medicine Research and Practice, 9(6), 231–237. https://doi.org/10.1016/j.cmrp.2019.11.005 Hamet, P., & Tremblay, J. (2017). Artificial Intelligence in Medicine. Metabolism, 69, S36–S40. https://doi.org/10.1016/j.metabol.2017.01.011 Haqiq, N., Zaim, M., Bouganssa, I., Salbi, A., & Sbihi, M. (2022). 
AIoT with I4.0: The Effect of Internet of Things and Artificial Intelligence Technologies on the Industry 4.0. ITM Web of Conferences, 46, 03002. EDP Sciences. https://doi.org/10.1051/itmconf/20224603002 Harkut, D. G., & Kasat, K. (2019). Artificial intelligence—Scope and limitations. IntechOpen. https://doi.org/10.5772/intechopen.77611 Hayes-Roth, F., & Jacobstein, N. (1994). The State of Knowledge-Based Systems. Communications of the ACM, 37(3), 26–39. https://doi.org/10.1145/175247.175249 Heinssen, R. K., Glass, C. R., & Knight, L. A. (1987). Assessing Computer Anxiety: Development and Validation of the Computer Anxiety Rating Scale. Computers in Human Behavior, 3(1), 49–59. https://doi.org/10.1016/0747-5632(87)90010-0 Herm, L.-V. (2023). Impact of Explainable AI on Cognitive Load: Insights From an Empirical Study. In Proceeding of the 31st European Conference on Information Systems (ECIS 2023). https://doi.org/10.48550/arxiv.2304.08861 Hernandez, I., & Preston, J. L. (2013). Disfluency Disrupts the Confirmation Bias. Journal of Experimental Social Psychology, 49(1), 178–182. https://doi.org/10.1016/j.jesp.2012.08.010 Hinson, J. M., Jameson, T. L., & Whitney, P. (2002). Somatic Markers, Working Memory, and Decision Making. Cognitive, Affective, & Behavioral Neuroscience, 2(4), 341–353. https://doi.org/10.3758/cabn.2.4.341 Hockey, G. R. J. (1997). Compensatory Control in the Regulation of Human Performance Under Stress and High Workload; A Cognitive-Energetical Framework. Biological Psychology, 45(1), 73–93. https://doi.org/10.1016/s0301-0511(96)05223-4 Hoffman, R. R., Mueller, S. T., Klein, G., & Litman, J. (2023). Measures for Explainable AI: Explanation Goodness, User Satisfaction, Mental Models, Curiosity, Trust, and Human-AI Performance. Frontiers of Computer Science, 5, 1096257. https://doi.org/10.3389/fcomp.2023.1096257 Holzinger, A., Langs, G., Denk, H., Zatloukal, K., & Müller, H. (2019). 
Causability and Explainability of Artificial Intelligence in Medicine. WIREs Data Mining and Knowledge Discovery, 9(4), e1312. https://doi.org/10.1002/widm.1312 Ibrahim, R. Z. A. R., Bakar, A. A., & Nor, S. B. M. (2007). Techno Stress: A Study Among Academic and Non Academic Staff. Lecture Notes in Computer Science, 4566, 118–124. https://doi.org/10.1007/978-3-540-73333-1_15 Islam, M. R., Ahmed, M. U., Barua, S., & Begum, S. (2022). A Systematic Review of Explainable Artificial Intelligence in Terms of Different Application Domains and Tasks. Applied Sciences, 12(3), 1353. https://doi.org/10.3390/app12031353 Janson, J., & Rohleder, N. (2017). Distraction Coping Predicts Better Cortisol Recovery After Acute Psychosocial Stress. Biological Psychology, 128, 117–124. https://doi.org/10.1016/j.biopsycho.2017.07.014 Jex, S. M., & Beehr, T. A. (1991). Emerging Theoretical and Methodological Issues in the Study of Work-related Stress. Research in Personnel and Human Resources Management, 9(31), l–365. Jiménez-Luna, J., Grisoni, F., & Schneider, G. (2020). Drug Discovery With Explainable Artificial Intelligence. Nature Machine Intelligence, 2(10), 573–584. https://doi.org/10.1038/s42256-020-00236-4 Kalyuga, S. (2011). Cognitive Load Theory: Implications for Affective Computing. In Proceeding of the 24th International FLAIRS Conference. Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in My Hand: Who’s the Fairest in the Land? On the Interpretations, Illustrations, and Implications of Artificial Intelligence. Business Horizons, 62(1), 15–25. https://doi.org/10.1016/j.bushor.2018.08.004 Karran, A. J., Demazure, T., Hudon, A., Senecal, S., & Léger, P.-M. (2022). Designing for Confidence: The Impact of Visualizing Artificial Intelligence Decisions. Frontiers in Neuroscience, 16, 883385. https://doi.org/10.3389/fnins.2022.883385 Kassin, S. M., Fein, S., & Markus, H. R. (2011). Social psychology (8th ed). Sage. Katzir, M., & Posten, A.-C. (2023). 
Are There Dominant Response Tendencies for Social Reactions? Trust Trumps Mistrust—Evidence From a Dominant Behavior Measure (DBM). Journal of Personality and Social Psychology, 125(1), 57–81. https://doi.org/10.1037/pspa0000334 Kaun, A. (2022). Suing the Algorithm: The Mundanization of Automated Decision-Making in Public Services Through Litigation. Information, Communication & Society, 25(14), 2046–2062. https://doi.org/10.1080/1369118X.2021.1924827 Kirsh, D. (2000). A Few Thoughts on Cognitive Overload. Intellectica, 30(1), 19–51. https://doi.org/10.3406/intel.2000.1592 Klingberg, T. (2008). The overflowing brain: Information overload and the limits of working memory. Oxford University Press. https://doi.org/10.5860/choice.46-5905 Koh, P. W., & Liang, P. (2017). Understanding Black-Box Predictions via Influence Functions. In Proceeding of the 34th International Conference on Machine Learning, 1885–1894. PMLR. Kraus, J., Scholz, D., Stiegemeier, D., & Baumann, M. (2020). The More You Know: Trust Dynamics and Calibration in Highly Automated Driving and the Effects of Take-Overs, System Malfunction, and System Transparency: Human Factors, 62(5), 718–736. https://doi.org/10.1177/0018720819853686 Kulesza, T., Stumpf, S., Burnett, M., Yang, S., Kwan, I., & Wong, W.-K. (2013). Too Much, Too Little, or Just Right? Ways Explanations Impact End Users’ Mental Models. In Joint Proceedings of the 2013 IEEE Symposium on Visual Languages and Human Centric Computing, 3–10. IEEE. https://doi.org/10.1109/VLHCC.2013.6645235 Kumain, S. C., Kumain, K., & Chaudhary, P. (2020). AI Impact on Various Domain: An Overview. International Journal of Management, 11(190), 1433–1439. Larmuseau, C., Coucke, H., Kerkhove, P., Desmet, P., & Depaepe, F. (2019). Cognitive Load During Online Complex Problem-Solving in a Teacher Training Context. In Proceeding of the European Distance and E-Learning Network Conference Proceedings (EDEN 2019), 466–474. Laudon, K. C., & Laudon, J. P. (2022). 
Management information systems: Managing the digital firm (17th edition). Pearson. Lazarus, R. S., & Folkman, S. (1987). Transactional Theory and Research on Emotions and Coping. European Journal of Personality, 1(3), 141–169. https://doi.org/10.1002/per.2410010304 Lee, J. (2018). Task Complexity, Cognitive Load, and L1 Speech. Applied Linguistics, 40(3), 506–539. Li, H., Yu, L., & He, W. (2019). The Impact of GDPR on Global Technology Development. Journal of Global Information Technology Management, 22(1), 1–6. https://doi.org/10.1080/1097198X.2019.1569186 Liu, C.-F., Chen, Z.-C., Kuo, S.-C., & Lin, T.-C. (2022). Does AI Explainability Affect Physicians’ Intention to Use AI? International Journal of Medical Informatics, 168, 104884. https://doi.org/10.1016/j.ijmedinf.2022.104884 Lundberg, S. M., & Lee, S.-I. (2017). A Unified Approach to Interpreting Model Predictions. In Proceeding of the 31st International Conference on Neural Information Processing Systems, 30, 4768–4777. Madsen, M., & Gregor, S. (2000). Measuring Human-Computer Trust. In Proceeding of the 11st Australasian Conference on Information Systems, 53, 6–8. Mahalakshmi, S., & Latha, R. (2019). Artificial Intelligence with the Internet of Things on Healthcare systems: A Survey. International Journal of Advanced Trends in Computer Science and Engineering, 8(6), 2847–2854. https://doi.org/10.30534/ijatcse/2019/27862019 Mahapatra, M., & Pillai, R. (2018). Technostress in Organizations: A Review of Literature. In Proceeding of the 26th European Conference on Information Systems (ECIS 2018), 99. Marcoulides, G. A. (1989). Measuring Computer Anxiety: The Computer Anxiety Scale. Educational and Psychological Measurement, 49(3), 733–739. https://doi.org/10.1177/001316448904900328 Markus, A. F., Kors, J. A., & Rijnbeek, P. R. (2021). The Role of Explainability in Creating Trustworthy Artificial Intelligence for Health Care: A Comprehensive Survey of the Terminology, Design Choices, and Evaluation Strategies. 
Journal of Biomedical Informatics, 113, 103655. https://doi.org/10.1016/j.jbi.2020.103655 Mazmanian, M., Orlikowski, W. J., & Yates, J. (2013). The Autonomy Paradox: The Implications of Mobile Email Devices for Knowledge Professionals. Organization Science, 24(5), 1337–1357. https://doi.org/10.1287/orsc.1120.0806 Merry, M., Riddle, P., & Warren, J. (2021). A Mental Models Approach for Defining Explainable Artificial Intelligence. BMC Medical Informatics and Decision Making, 21(1), 344. https://doi.org/10.1186/s12911-021-01703-7 Mishra, S., Prakash, M., Hafsa, A., & Anchana, G. (2018). Anfis to Detect Brain Tumor Using MRI. International Journal of Engineering and Technology, 7(3.27), 209–214. https://doi.org/10.14419/ijet.v7i3.27.17763 Mohammadi, B., Malik, N., Derdenger, T., & Srinivasan, K. (2022). Sell Me the Blackbox! Regulating eXplainable Artificial Intelligence (XAI) May Harm Consumers (arXiv preprint arXiv:2209.03499). https://doi.org/10.48550/ARXIV.2209.03499 Montavon, G., Samek, W., Samek, W., & Müller, K.-R. (2018). Methods for Interpreting and Understanding Deep Neural Networks. Digital Signal Processing, 73, 1–15. https://doi.org/10.1016/j.dsp.2017.10.011 Nagahisarchoghaei, M., Nur, N., Cummins, L., Nur, N., Karimi, M. M., Nandanwar, S., Bhattacharyya, S., & Rahimi, S. (2023). An Empirical Survey on Explainable AI Technologies: Recent Trends, Use-Cases, and Categories from Technical and Application Perspectives. Electronics, 12(5), 1092. https://doi.org/10.3390/electronics12051092 Nunnally, J. (1978). Psychometric theory (2nd ed). McGraw-Hill. Oei, N. Y. L., Everaerd, W. T., Elzinga, B. M., van Well, S., & Bermond, B. (2006). Psychosocial Stress Impairs Working Memory at High Loads: An Association With Cortisol Levels and Memory Retrieval. Stress, 9(3), 133–141. https://doi.org/10.1080/10253890600965773 Oei, N. Y. L., Veer, I. M., Wolf, O. T., Spinhoven, P., Rombouts, S. A. R. B., & Elzinga, B. M. (2011). 
Stress Shifts Brain Activation Towards Ventral ‘Affective’ Areas During Emotional Distraction. Social Cognitive and Affective Neuroscience, 7(4), 403–412. https://doi.org/10.1093/scan/nsr024 Palacio, S., Lucieri, A., Munir, M., Ahmed, S., Hees, J., & Dengel, A. (2021). XAI Handbook: Towards a Unified Framework for Explainable AI. In Proceeding of the IEEE/CVF International Conference on Computer Vision, 3766–3775. https://doi.org/10.1109/iccvw54120.2021.00420 Parayitam, S., & Dooley, R. S. (2009). The Interplay Between Cognitive- And Affective Conflict and Cognition- And Affect-based Trust in Influencing Decision Outcomes. Journal of Business Research, 62(8), 789–796. Parsons, K. S., Warm, J. S., Warm, J. S., Nelson, W. T., Matthews, G., & Riley, M. A. (2007). Detection-Action Linkage in Vigilance: Effects on Workload and Stress. 51(19), 1291–1295. https://doi.org/10.1177/154193120705101902 Payrovnaziri, S. N., Chen, Z., Rengifo-Moreno, P., Miller, T., Bian, J., Chen, J. H., Liu, X., & He, Z. (2020). Explainable Artificial Intelligence Models Using Real-world Electronic Health Record Data: A Systematic Scoping Review. Journal of the American Medical Informatics Association, 27(7), 1173–1185. https://doi.org/10.1093/jamia/ocaa053 Pessin, J. (1933). The Comparative Effects of Social and Mechanical Stimulation on Memorizing. The American Journal of Psychology, 45(2), 263–270. https://doi.org/10.2307/1414277 Petkovic, D. (2023). It is Not ‘Accuracy Vs. Explainability’—We Need Both for Trustworthy AI Systems. IEEE Transactions on Technology and Society, 4(1), 46–53. https://doi.org/10.48550/arxiv.2212.11136 Pogash, R. M., Streufert, S., Streufert, S. C., Milton, S., & Hershey Medical Center Hershey PA Dept of Behavioral Science. (1983). Effects of Task Load, Task Speed and Cognitive Complexity on Satisfaction. Puri, R. (2015). Mindfulness as a Stress Buster in Organizational Set Up. International Journal of Education and Management Studies, 5(4), 366–368. Rabbitt, P. 
(1978). Hand Dominance, Attention, and the Choice Between Responses. Quarterly Journal of Experimental Psychology, 30(3), 407–416. Radue, E.-W., Weigel, M., Wiest, R., & Urbach, H. (2016). Introduction to Magnetic Resonance Imaging for Neurologists. Continuum: Lifelong Learning in Neurology, 22(5), 1379–1398. https://doi.org/10.1212/con.0000000000000391 Ragu-Nathan, T. S., Tarafdar, M., Ragu-Nathan, B. S., & Tu, Q. (2008). The Consequences of Technostress for End Users in Organizations: Conceptual Development and Empirical Validation. Information Systems Research, 19(4), 417–433. https://doi.org/10.1287/isre.1070.0165 Raio, C. M., Orederu, T. A., Palazzolo, L., Shurick, A. A., & Phelps, E. A. (2013). Cognitive Emotion Regulation Fails the Stress Test. Proceedings of the National Academy of Sciences, 110(37), 15139–15144. https://doi.org/10.1073/pnas.1305706110 Regan, D. T., & Cheng, J. B. (1973). Distraction and Attitude Change: A Resolution. Journal of Experimental Social Psychology, 9(2), 138–147. Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). ‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier. In Proceeding of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144. https://doi.org/10.1145/2939672.2939778 Riedl, R., Kindermann, H., Auinger, A., & Javor, A. (2012). Technostress from a Neurobiological Perspective—System Breakdown Increases the Stress Hormone Cortisol in Computer Users. 4(2), 61–69. https://doi.org/10.1007/s12599-012-0207-7 Rosen, L. D., Sears, D. C., & Weil, M. M. (1993). Treating Technophobia: A Longitudinal Evaluation of the Computerphobia Reduction Program. Computers in Human Behavior, 9(1), 27–50. https://doi.org/10.1016/0747-5632(93)90019-o Rudnick, A. (2019). The Black Box Myth: Artificial Intelligence’s Threat Re-Examined. International Journal of Extreme Automation and Connectivity in Healthcare, 1(1), 1–3. https://doi.org/10.4018/IJEACH.2019010101 Rydval, O. (2012). 
The Causal Effect of Cognitive Abilities on Economic Behavior: Evidence from a Forecasting Task with Varying Cognitive Load. CERGE-EI Working Paper Series. https://doi.org/10.2139/ssrn.2046942 Salanova, M., Llorens, S., & Ventura, M. (2014). Technostress: The Dark Side of Technologies. The Impact of ICT on Quality of Working Life, 87–103. Dordrecht: Springer Netherlands. https://doi.org/10.1007/978-94-017-8854-0_6 Salimi, A., & Dadashpour, S. (2012). Task Complexity and Language Production Dilemmas (Robinson’s Cognition Hypothesis vs. Skehan’s Trade-off Model). Procedia - Social and Behavioral Sciences, 46, 643–652. Salo, M., Pirkkalainen, H., Chua, C. E. H., & Koskelainen, T. (2022). Formation and Mitigation of Technostress in the Personal Use of IT. Management Information Systems Quarterly, 46(2), 1073–1108. https://doi.org/10.25300/misq/2022/14950 Samek, W., Montavon, G., Vedaldi, A., Hansen, L. K., & Müller, K.-R. (Eds.). (2019). Explainable AI: Interpreting, explaining and visualizing deep learning (Vol. 11700). Springer. https://doi.org/10.1007/978-3-030-28954-6 Samson, K., & Kostyszyn, P. (2015). Effects of Cognitive Load on Trusting Behavior – An Experiment Using the Trust Game. PLOS ONE, 10(5), 1–10. https://doi.org/10.1371/journal.pone.0127680 Sanajou, N., Zohali, L., & Zabihi, F. (2017). Do Task Complexity Demands Influence the Learners’ Perception of Task Difficulty? International Journal of Applied Linguistics and English Literature, 6(6), 71–77. Sanders, G. S. (1981). Driven by Distraction: An Integrative Review of Social Facilitation Theory and Research. Journal of Experimental Social Psychology, 17(3), 227–251. Sanders, G. S., Baron, R. S., & Moore, D. L. (1978). Distraction and Social Comparison as Mediators of Social Facilitation Effects. Journal of Experimental Social Psychology, 14(3), 291–303. Sarason, I. G., & Sarason, I. G. (1980). Test Anxiety: Theory, Research, and Applications. L. Erlbaum Associates. 
Sartaj, B., Ankita, K., Prajakta, B., Sameer, D., Navoneel, C., & Swati, K. (2020). Brain Tumor Classification (MRI) [dataset]. https://doi.org/10.34740/kaggle/dsv/1183165 Scheerer, M., & Reussner, R. (2021). Reliability Prediction of Self-Adaptive Systems Managing Uncertain AI Black-Box Components. 2021 International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS), 111–117. IEEE. https://doi.org/10.1109/SEAMS51251.2021.00024 Schmeck, A., Opfermann, M., van Gog, T., Paas, F., & Leutner, D. (2015). Measuring Cognitive Load With Subjective Rating Scales During Problem Solving: Differences Between Immediate and Delayed Ratings. Instructional Science, 43(1), 93–114. https://doi.org/10.1007/s11251-014-9328-3 Schoeffer, J., Machowski, Y., & Kuehl, N. (2021). A Study on Fairness and Trust Perceptions in Automated Decision Making. In Joint Proceedings of the ACM IUI 2021 Workshops, 170005. https://doi.org/10.5445/IR/1000130551 Schoofs, D., Wolf, O. T., & Smeets, T. (2009). Cold Pressor Stress Impairs Performance on Working Memory Tasks Requiring Executive Functions in Healthy Young Men. Behavioral Neuroscience, 123(5), 1066–1075. https://doi.org/10.1037/a0016980 Schwabe, L., & Wolf, O. T. (2010). Emotional Modulation of the Attentional Blink: Is There an Effect of Stress? Emotion, 10(2), 283–288. https://doi.org/10.1037/a0017751 Shabani, M., & Marelli, L. (2019). Re‐Identifiability of Genomic Data and the GDPR: Assessing the Re‐identifiability of Genomic Data in Light of the EU General Data Protection Regulation. EMBO Reports, 20(6), e48316. https://doi.org/10.15252/embr.201948316 Shackman, A. J., Maxwell, J. S., McMenamin, B. W., Greischar, L. L., & Davidson, R. J. (2011). Stress Potentiates Early and Attenuates Late Stages of Visual Processing. The Journal of Neuroscience, 31(3), 1156–1161. https://doi.org/10.1523/jneurosci.3384-10.2011 Shimazu, A., & Schaufeli, W. B. (2007). 
Does Distraction Facilitate Problem-Focused Coping with Job Stress? A 1-Year Longitudinal Study. Journal of Behavioral Medicine, 30(5), 423–434. Shiv, B., & Fedorikhin, A. (1999). Heart and Mind in Conflict: The Interplay of Affect and Cognition in Consumer Decision Making. Journal of Consumer Research, 26(3), 278–292. https://doi.org/10.1086/209563 Shu, Q., Tu, Q., & Wang, K. (2011). The Impact of Computer Self-Efficacy and Technology Dependence on Computer-Related Technostress: A Social Cognitive Theory Perspective. International Journal of Human-Computer Interaction, 27(10), 923–939. https://doi.org/10.1080/10447318.2011.555313 Sokol, K., & Flach, P. (2021). Explainability Is in the Mind of the Beholder: Establishing the Foundations of Explainable Artificial Intelligence (arXiv preprint arXiv:2112.14466). https://doi.org/10.48550/ARXIV.2112.14466 Sultana, T., & Nemati, H. R. (2021). Impact of Explainable AI and Task Complexity on Human-Machine Symbiosis. In Proceedings of the 27th Americas Conference on Information Systems (AMCIS 2021), 1715. Susanti, V. D., Dwijanto, D., & Mariani, S. (2021). Working Memory Dalam Pembelajaran Matematika: Sebuah Kajian Teori [Working Memory in Mathematics Learning: A Theoretical Review]. Prima Magistra: Jurnal Ilmiah Kependidikan, 3(1), 62–70. Swann, W. B., Hixon, J. G., Stein-Seroussi, A., & Gilbert, D. T. (1990). The Fleeting Gleam of Praise: Cognitive Processes Underlying Behavioral Reactions to Self-relevant Feedback. Journal of Personality and Social Psychology, 59(1), 17–26. https://doi.org/10.1037/0022-3514.59.1.17 Sweller, J. (1988). Cognitive Load During Problem Solving: Effects on Learning. Cognitive Science, 12(2), 257–285. https://doi.org/10.1207/s15516709cog1202_4 Sweller, J. (1994). Cognitive Load Theory, Learning Difficulty, and Instructional Design. Learning and Instruction, 4(4), 295–312. https://doi.org/10.1016/0959-4752(94)90003-5 Szalma, J. L., Warm, J. S., Matthews, G., Dember, W. N., Weiler, E. M., Meier, A., & Eggemeier, F. T. (2004). 
Effects of Sensory Modality and Task Duration on Performance, Workload, and Stress in Sustained Attention. Human Factors, 46(2), 219–233. https://doi.org/10.1518/hfes.46.2.219.37334 Tarafdar, M., Tu, Q., Ragu-Nathan, B. S., & Ragu-Nathan, T. S. (2007). The Impact of Technostress on Role Stress and Productivity. Journal of Management Information Systems, 24(1), 301–328. https://doi.org/10.2753/mis0742-1222240109 Tecce, J. J., Savignano-Bowman, J., & Meinbresse, D. (1976). Contingent Negative Variation and the Distraction—Arousal Hypothesis. Electroencephalography and Clinical Neurophysiology, 41(3), 277–286. https://doi.org/10.1016/0013-4694(76)90120-6 Tjoa, E., & Guan, C. (2020). A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI. IEEE Transactions on Neural Networks and Learning Systems, 32(11), 4793–4813. https://doi.org/10.1109/tnnls.2020.3027314 Travis, L. E. (1925). The Effect of a Small Audience Upon Eye-hand Coordination. The Journal of Abnormal and Social Psychology, 20(2), 142–146. https://doi.org/10.1037/h0071311 Tremblay, M., Rethans, J., & Dolmans, D. (2022). Task Complexity and Cognitive Load in Simulation-Based Education: A Randomised Trial. Medical Education, 57(2), 161–169. Tu, Q., Wang, K., & Shu, Q. (2005). Computer-Related Technostress in China. Communications of the ACM, 48(4), 77–81. https://doi.org/10.1145/1053291.1053323 Uddin, M. J., Ferdous, M., Rahaman, A., & Ahmad, S. (2023). Mapping of Technostress Research Trends: A Bibliometric Analysis. In Proceedings of the 7th International Conference on Intelligent Computing and Control Systems (ICICCS 2023), 938–943. IEEE. https://doi.org/10.1109/iciccs56967.2023.10142487 Uehara, E., & Landeira-Fernandez, J. (2010). Um Panorama Sobre O Desenvolvimento Da Memória De Trabalho E Seus Prejuízos No Aprendizado Escolar [An Overview of Working Memory Development and Its Impairments in School Learning]. Ciências & Cognição, 15(2), 31–41. Van Merriënboer, J. J., & Sweller, J. (2010). 
Cognitive Load Theory in Health Professional Education: Design Principles and Strategies. Medical Education, 44(1), 85–93. Vignesh, M., & Thanesh, K. (2020). A Review on Artificial Intelligence (AI) is Future Generation. International Journal of Engineering Research & Technology (IJERT) Eclectic, 8(7). Vilone, G., & Longo, L. (2021). Classification of Explainable Artificial Intelligence Methods through Their Output Formats. Machine Learning and Knowledge Extraction, 3(3), 615–661. https://doi.org/10.3390/make3030032 Wang, D., Yang, Q., Abdul, A., & Lim, B. Y. (2019). Designing Theory-Driven User-Centric Explainable AI. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–15. https://doi.org/10.1145/3290605.3300831 Wang, K., Shu, Q., & Tu, Q. (2008). Technostress Under Different Organizational Environments: An Empirical Investigation. Computers in Human Behavior, 24(6), 3002–3013. https://doi.org/10.1016/j.chb.2008.05.007 Ward, A., Vickers, Z. M., & Mann, T. (2000). Don’t Mind if I Do: Disinhibited Eating Under Cognitive Load. Journal of Personality and Social Psychology, 78(4), 753–763. https://doi.org/10.1037/0022-3514.78.4.753 Waugh, C. E., Shing, E. Z., & Furr, R. M. (2020). Not All Disengagement Coping Strategies Are Created Equal: Positive Distraction, but Not Avoidance, Can Be an Adaptive Coping Strategy for Chronic Life Stressors. Anxiety, Stress, & Coping, 33(5), 511–529. https://doi.org/10.1080/10615806.2020.1755820 Weil, M. M., & Rosen, L. D. (1997). TechnoStress: Coping with Technology @Work @Home @Play. J. Wiley. Yadav, P., Yadav, A., & Agrawal, R. (2022). Use of Artificial Intelligence in the Real World. International Journal of Computer Science and Mobile Computing, 11(12), 83–90. https://doi.org/10.47760/ijcsmc.2022.v11i12.008 Yaverbaum, G. J. (1988). Critical Factors in the User Environment: An Experimental Study of Users, Organizations and Tasks. Management Information Systems Quarterly, 12(1), 75–88. 
https://doi.org/10.2307/248807 Zajonc, R. B. (1965). Social Facilitation: A Solution is Suggested for an Old Unresolved Social Psychological Problem. Science, 149(3681), 269–274. https://doi.org/10.1126/science.149.3681.269 Zevenbergen, B., Woodruff, A., & Kelley, P. G. (2020). Explainability Case Studies (arXiv preprint arXiv:2009.00246). https://doi.org/10.48550/ARXIV.2009.00246 Zhang, K., & Aslan, A. B. (2021). AI Technologies for Education: Recent Research & Future Directions. Computers and Education: Artificial Intelligence, 2, 100025. https://doi.org/10.1016/j.caeai.2021.100025 Zhou, S.-J. (2004). Working Memory in Learning Disabled Children. Chinese Journal of Clinical Psychology, 12(3), 313–317. Zielonka, J. T. (2022). The Impact of Trust in Technology on the Appraisal of Technostress Creators in a Work-Related Context. In Proceedings of the 55th Hawaii International Conference on System Sciences. https://doi.org/10.24251/hicss.2022.715
Description: Master's
National Chengchi University
Department of Management Information Systems
110356035
Source: http://thesis.lib.nccu.edu.tw/record/#G0110356035
Type: thesis
dc.contributor.advisor 周致遠zh_TW
dc.contributor.advisor Chou, Chih-Yuanen_US
dc.contributor.author (Authors) 鄭安佑zh_TW
dc.contributor.author (Authors) Zheng, An-Youen_US
dc.creator (作者) 鄭安佑zh_TW
dc.creator (作者) Zheng, An-Youen_US
dc.date (日期) 2024en_US
dc.date.accessioned 1-Feb-2024 10:56:44 (UTC+8)-
dc.date.available 1-Feb-2024 10:56:44 (UTC+8)-
dc.date.issued (上傳時間) 1-Feb-2024 10:56:44 (UTC+8)-
dc.identifier (Other Identifiers) G0110356035en_US
dc.identifier.uri (URI) https://nccur.lib.nccu.edu.tw/handle/140.119/149469-
dc.description (描述) 碩士zh_TW
dc.description (描述) 國立政治大學zh_TW
dc.description (描述) 資訊管理學系zh_TW
dc.description (描述) 110356035zh_TW
dc.description.abstract (摘要) 隨著資通訊技術(ICTs)的持續發展,許多使用者也不經意地感受到水漲船高的壓力,我們通常稱之為「科技壓力」。這種與科技互動而產生的壓力,常以生產力下降、疲勞、與焦慮等負面效應展現。同時,人工智慧(AI)作為一種迅速發展的新興科技,一系列特殊的挑戰也應運而生。儘管在組織的決策過程中,AI的使用越來越廣泛,但AI自身決策機制並不透明,這種「黑箱」式決策降低了使用者對其的信任感,且產生的決策是否具有危險性也不得而知。有鑑於此,為了幫助AI的決策過程更加清晰,以提高使用者對其的信任,AI的可解釋性日益重要。然而,透過具可解釋性的AI(XAI)所獲得的終端使用者之信任感,是否會因為人們感受到的科技壓力而有所削弱,是一項值得討論的研究議題。因此,本研究透過分心衝突理論與認知負荷的角度,深入探討何種類別的XAI的解釋方法所帶來的信任會因為終端用戶既有的科技壓力而有所削弱。同時透過實驗與統計分析,檢測不同解釋方法對於信任的關係,以及科技壓力對該關係的調節作用。根據本研究之結果,我們發現科技壓力不會影響最終用戶對人工智慧解釋的信任,但科技壓力的存在本身與對AI的信任有顯著的負向關係。我們期許本研究可以為將來研究與實務上的應用提供一些啟發與參考。zh_TW
dc.description.abstract (摘要) With the continuous development of information and communication technologies (ICTs), many users inadvertently experience increasing pressure, often referred to as 'technostress'. This type of stress, arising from interactions with technology, is commonly manifested in negative effects such as decreased productivity, fatigue, and anxiety. Concurrently, artificial intelligence (AI), as a rapidly evolving technology, presents a series of unique challenges. Despite the growing use of AI in organizational decision-making processes, the opaque nature of AI decision-making mechanisms, often described as "black-box" decisions, undermines user trust. The potential risks associated with these AI-generated decisions are also uncertain. In light of this, the importance of AI explainability has grown significantly to enhance user trust in AI decision-making processes. However, whether the trust in explainable AI (XAI) among end-users is diminished due to perceived technostress is a research topic worthy of discussion. Therefore, this study delves into how certain types of XAI explanations might be less effective due to the existing technostress of end-users, exploring this through the lens of distraction-conflict theory and cognitive load. It also examines, through experiments and statistical analysis, the relationship between different explanatory methods and trust, as well as the moderating role of technostress in this relationship. Based on the results of this study, we found that technostress does not impact the end users' trust in AI explanations. However, the presence of technostress itself is significantly negatively related to trust in AI. We hope that this research can provide inspiration and references for future studies and practical applications.en_US
dc.description.tableofcontents CHAPTER 1. INTRODUCTION 1 CHAPTER 2. LITERATURE REVIEW 4 2.1 Technostress 4 2.2 Explainability in Artificial Intelligence 6 2.2.1 The Indispensability of Explainability 10 2.2.2 The Classifications of XAI Techniques 12 2.3 Distraction-Conflict Theory 20 2.3.1 Effects of Arousal and Cognitive Load on Dominant Responses 22 2.3.2 Cognitive Load and Working Memory 24 2.3.3 Trust in Distraction-Conflict Theory 27 CHAPTER 3. RESEARCH FRAMEWORK 30 3.1 Research Model 30 3.2 Hypothesis Development 32 3.2.1 The Effect of XAI Explanation Type on Trust 32 3.2.2 The Comparison of XAI Explanation Types on Trust 34 3.2.3 The Effect of Technostress 34 3.3 Construct Measurements 35 3.3.1 Perceived Technostress (PT) 35 3.3.2 AI Explanation Type (AIET) 37 3.3.3 Trust in AI (TAI) 39 3.3.4 Control Variables 40 CHAPTER 4. RESEARCH METHODOLOGY 41 4.1 Data Collection 41 4.2 Data Analysis 44 CHAPTER 5. RESEARCH RESULTS 46 5.1 Measurement Model Test 46 5.2 Structural Model Test 50 5.3 Model Generalization 56 CHAPTER 6. DISCUSSION 57 6.1 Interpretation of Results 57 6.2 Theoretical Contribution 60 6.3 Practical Implications 61 6.4 Limitations and Future Research 62 CHAPTER 7. CONCLUSIONS 64 REFERENCES 65 APPENDIX A 80zh_TW
dc.format.extent 1762995 bytes-
dc.format.mimetype application/pdf-
dc.source.uri (資料來源) http://thesis.lib.nccu.edu.tw/record/#G0110356035en_US
dc.subject (關鍵詞) 人工智慧zh_TW
dc.subject (關鍵詞) 可解釋性zh_TW
dc.subject (關鍵詞) 科技壓力zh_TW
dc.subject (關鍵詞) 分心衝突理論zh_TW
dc.subject (關鍵詞) 認知負荷zh_TW
dc.subject (關鍵詞) 主導反應zh_TW
dc.subject (關鍵詞) Artificial Intelligenceen_US
dc.subject (關鍵詞) Explainabilityen_US
dc.subject (關鍵詞) Technostressen_US
dc.subject (關鍵詞) Distraction-Conflict Theoryen_US
dc.subject (關鍵詞) Cognitive Loaden_US
dc.subject (關鍵詞) Dominant Responseen_US
dc.title (題名) 促進信任的XAI:科技壓力下人工智慧解釋方法的影響分析zh_TW
dc.title (題名) XAI for Trust: Analyzing the Impact of Explanation Methods under Technostressen_US
dc.type (資料類型) thesisen_US
dc.relation.reference (參考文獻) Abdul, A., Vermeulen, J., Wang, D., Lim, B. Y., & Kankanhalli, M. S. (2018). Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1–18. https://doi.org/10.1145/3173574.3174156 Abdul-Gader, A. H., & Kozar, K. A. (1995). The Impact of Computer Alienation on Information Technology Investment Decisions: An Exploratory Cross-National Analysis. Management Information Systems Quarterly, 19(4), 535–559. https://doi.org/10.2307/249632 Åborg, C., & Billing, A. (2003). Health Effects of ‘The Paperless Office’ – Evaluations of the Introduction of Electronic Document Handling Systems. Behaviour & Information Technology, 22(6), 389–396. https://doi.org/10.1080/01449290310001624338 Adadi, A., & Berrada, M. (2018). Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access, 6, 52138–52160. https://doi.org/10.1109/ACCESS.2018.2870052 Aggarwal, A., Lohia, P., Nagar, S., Dey, K., & Saha, D. (2019). Black Box Fairness Testing of Machine Learning Models. In Proceedings of the 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 625–635. https://doi.org/10.1145/3338906.3338937 Alicioglu, G., & Sun, B. (2021). A Survey of Visual Analytics for Explainable Artificial Intelligence Methods. Computers & Graphics, 102, 502–520. https://doi.org/10.1016/j.cag.2021.09.002 Alloway, T. P. (2006). How Does Working Memory Work in the Classroom? Educational Research and Reviews, 1(4), 134–139. Amat, B. (2021). Analysis of the Impact of Artificial Intelligence on Our Lives. The Frontiers of Society, Science and Technology, 3(7), 45–50. https://doi.org/10.25236/FSST.2021.030709 Araujo, T., Helberger, N., Kruikemeier, S., & de Vreese, C. H. (2020). In AI we trust? Perceptions about automated decision-making by artificial intelligence. 
AI & Society, 35(3), 611–623. https://doi.org/10.1007/s00146-019-00931-w Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., García, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI. Information Fusion, 58, 82–115. https://doi.org/10.1016/j.inffus.2019.12.012 Arya, V., Bellamy, R. K., Chen, P.-Y., Dhurandhar, A., Hind, M., Hoffman, S. C., Houde, S., Liao, Q. V., Luss, R., Mojsilović, A., Mourad, S., Pedemonte, P., Raghavendra, R., Richards, J., Sattigeri, P., Shanmugam, K., Singh, M., Varshney, K. R., Wei, D., & Zhang, Y. (2019). One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques (arXiv preprint arXiv:1909.03012). https://doi.org/10.48550/arXiv.1909.03012 Ayyagari, R., Grover, V., & Purvis, R. (2011). Technostress: Technological Antecedents and Implications. Management Information Systems Quarterly, 35(4), 831–858. https://doi.org/10.2307/41409963 Baddeley, A. (1992). Working Memory: The Interface between Memory and Cognition. Journal of Cognitive Neuroscience, 4(3), 281–288. Bannert, M. (2002). Managing Cognitive Load—Recent Trends in Cognitive Load Theory. Learning and Instruction, 12(1), 139–146. Baron, R. S. (1986). Distraction-Conflict Theory: Progress and Problems. Advances in Experimental Social Psychology, 19, 1–40. https://doi.org/10.1016/S0065-2601(08)60211-7 Baumeister, R. F., & Bushman, B. J. (2020). Social psychology and human nature. Cengage Learning. Becker, J. T., & Morris, R. G. (1999). Working Memory(s). Brain and Cognition, 41(1), 1–8. Benitez, J. M., Castro, J. L., & Requena, I. (1997). Are Artificial Neural Networks Black Boxes? IEEE Transactions on Neural Networks, 8(5), 1156–1164. https://doi.org/10.1109/72.623216 Benjamin, D. J., Brown, S. A., & Shapiro, J. M. (2013). Who is ‘Behavioral’? 
Cognitive Ability and Anomalous Preferences. Journal of the European Economic Association, 11(6), 1231–1255. https://doi.org/10.2139/ssrn.675264 Bohner, G., Ruder, M., & Erb, H.-P. (2002). When Expertise Backfires: Contrast and Assimilation Effects in Persuasion. British Journal of Social Psychology, 41(4), 495–519. https://doi.org/10.1348/014466602321149858 Bologna, G., & Hayashi, Y. (2017). Characterization of Symbolic Rules Embedded in Deep DIMLP Networks: A Challenge to Transparency of Deep Learning. Journal of Artificial Intelligence and Soft Computing Research, 7(4), 265–286. https://doi.org/10.1515/jaiscr-2017-0019 Brillhart, P. E. (2004). Technostress in the Workplace: Managing Stress in the Electronic Workplace. Journal of American Academy of Business, 5(1/2), 302–307. Brod, C. (1982). Managing Technostress: Optimizing the Use of Computer Technology. The Personnel Journal, 61(10), 753–757. Brod, C. (1984). Technostress: The human cost of the computer revolution. Addison-Wesley. Brosnan, M. (1998). Technophobia: The psychological impact of information technology. Routledge. Brown, R., Miller, G. A., Galanter, E., & Pribram, K. H. (1960). Plans and the Structure of Behavior. Language, 36(4), 527–532. https://doi.org/10.2307/411065 Cahour, B., & Forzy, J.-F. (2009). Does Projection Into Use Improve Trust and Exploration? An Example With a Cruise Control System. Safety Science, 47(9), 1260–1270. https://doi.org/10.1016/j.ssci.2009.03.015 Carabantes, M. (2020). Black-box Artificial Intelligence: An Epistemological and Critical Analysis. AI & SOCIETY, 35(2), 309–317. https://doi.org/10.1007/s00146-019-00888-w Castets-Renard, C. (2019). Accountability of Algorithms in the GDPR and beyond: A European Legal Framework on Automated Decision-Making. Fordham Intellectual Property, Media & Entertainment Law Journal, 30(1), 91–138. https://doi.org/10.2139/ssrn.3391266 Chandler, P., & Sweller, J. (1996). Cognitive Load While Learning to Use a Computer Program. 
Applied Cognitive Psychology, 10(2), 151–170. Chen, F. (2011). Effects of Cognitive Load on Trust. Eveleigh: National ICT Australia (NICTA). Chico, V. (2018). The Impact of the General Data Protection Regulation on Health Research. British Medical Bulletin, 128(1), 109–118. https://doi.org/10.1093/bmb/ldy038 Colavita, F. B., & Weisberg, D. (1979). A Further Investigation of Visual Dominance. Perception & Psychophysics, 25(4), 345–347. Compeau, D., Higgins, C. A., & Huff, S. L. (1999). Social Cognitive Theory and Individual Reactions to Computing Technology: A Longitudinal Study. Management Information Systems Quarterly, 23(2), 145–158. https://doi.org/10.2307/249749 Confalonieri, R., Coba, L., Wagner, B., & Besold, T. R. (2021). A Historical Perspective of Explainable Artificial Intelligence. WIREs Data Mining and Knowledge Discovery, 11(1), e1391. https://doi.org/10.1002/widm.1391 Conway, A. R. A., Cowan, N., & Bunting, M. F. (2001). The Cocktail Party Phenomenon Revisited: The Importance of Working Memory Capacity. Psychonomic Bulletin & Review, 8(2), 331–335. https://doi.org/10.3758/BF03196169 Cooper, C. L., Dewe, P., & O’Driscoll, M. P. (2001). Organizational stress: A review and critique of theory, research, and applications. Sage. Cooper, G. (1990). Cognitive Load Theory as an Aid for Instructional Design. Australasian Journal of Educational Technology, 6(2), 108–113. Cowan, N. (2008). What Are the Differences Between Long-Term, Short-Term, and Working Memory? Progress in Brain Research, 169, 323–338. https://doi.org/10.1016/s0079-6123(07)00020-9 Davenport, T. H., & Harris, J. G. (2005). Automated Decision Making Comes of Age. MIT Sloan Management Review, 46(4), 83–89. de Sousa, W. G., de Melo, E. R. P., de Souza Bermejo, P. H., Farias, R. A. S., & de Oliveira Gomes, A. (2019). How and Where is Artificial Intelligence in the Public Sector Going? A Literature Review and Research Agenda. 
Government Information Quarterly, 36(4), 101392. https://doi.org/10.1016/j.giq.2019.07.004 DeCamp, M., & Tilburt, J. C. (2019). Why We Cannot Trust Artificial Intelligence in Medicine. The Lancet Digital Health, 1(8), e390. https://doi.org/10.1016/S2589-7500(19)30197-9 DeMaagd, G. R. (1983). Management Information Systems. Management Accounting, 65(4), 10–71. Demajo, L. M., Vella, V., & Dingli, A. (2020). Explainable AI for Interpretable Credit Scoring. Computer Science & Information Technology (CS & IT), 185–203. https://doi.org/10.5121/csit.2020.101516 D’Esposito, M. (2007). From Cognitive to Neural Models of Working Memory. Philosophical Transactions of the Royal Society B: Biological Sciences, 362(1481), 761–772. Dobrescu, E. M., & Dobrescu, E. M. (2018). Artificial Intelligence (AI)—The Technology That Shapes The World. 6(2), 71–81. Dong, Q., & Howard, T. (2006). Emotional Intelligence, Trust and Job Satisfaction. Competition Forum, 4(2), 381–388. Druce, J., Harradon, M., & Tittle, J. (2021). Explainable Artificial Intelligence (XAI) for Increasing User Trust in Deep Reinforcement Learning Driven Autonomous Systems (arXiv preprint arXiv:2106.03775). https://doi.org/10.48550/ARXIV.2106.03775 Duell, J. A. (2021). A Comparative Approach to Explainable Artificial Intelligence Methods in Application to High-Dimensional Electronic Health Records: Examining the Usability of XAI (arXiv preprint arXiv:2103.04951). https://doi.org/10.48550/ARXIV.2103.04951 Duffy, S., & Smith, J. (2014). Cognitive Load in the Multi-Player Prisoner’s Dilemma Game: Are There Brains in Games? Journal of Behavioral and Experimental Economics, 51, 47–56. https://doi.org/10.2139/ssrn.1841523 Dzindolet, M. T., Peterson, S. A., Pomranky, R. A., Pierce, L. G., & Beck, H. P. (2003). The Role of Trust in Automation Reliance. International Journal of Human-Computer Studies, 58(6), 697–718. https://doi.org/10.1016/S1071-5819(03)00038-7 Elfering, A., Grebner, S., Semmer, N. K., Kaiser‐Freiburghaus, D., Lauper‐Del Ponte, S., & Witschi, I. (2005). Chronic Job Stressors and Job Control: Effects on Event‐Related Coping Success and Well‐being. Journal of Occupational and Organizational Psychology, 78(2), 237–252. https://doi.org/10.1348/096317905X40088 Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On Seeing Human: A Three-factor Theory of Anthropomorphism. Psychological Review, 114(4), 864–886. https://doi.org/10.1037/0033-295X.114.4.864 Ericsson, K. A., & Kintsch, W. (1995). Long-term Working Memory. Psychological Review, 102(2), 211–245. https://doi.org/10.1037/0033-295x.102.2.211 Esteva, A., Kuprel, B., Novoa, R. A., Ko, J. M., Swetter, S. M., Blau, H. M., & Thrun, S. (2017). Dermatologist-Level Classification of Skin Cancer With Deep Neural Networks. Nature, 542(7639), 115–118. https://doi.org/10.1038/nature21056 Eysenck, M. W. (1992). Anxiety: The Cognitive Perspective. Psychology Press. Fahrenkamp-Uppenbrink, J. (2019). What Good is a Black Box? Science, 364(6435), 38–40. Främling, K. (2020). Explainable AI without Interpretable Model (arXiv preprint arXiv:2009.13996). https://doi.org/10.48550/ARXIV.2009.13996 Gefen, D., Straub, D., & Boudreau, M.-C. (2000). Structural Equation Modeling and Regression: Guidelines for Research Practice. Communications of the Association for Information Systems, 4. https://doi.org/10.17705/1CAIS.00407 Ghosh, S., & Singh, A. (2020). The Scope of Artificial Intelligence in Mankind: A Detailed Review. Journal of Physics: Conference Series, 1531, 012045. https://doi.org/10.1088/1742-6596/1531/1/012045 Gilbert, D. T. (1989). Thinking lightly about others: Automatic components of the social inference process. In Unintended thought (pp. 189–211). The Guilford Press. Gilbert, D. T., Pelham, B. W., & Krull, D. S. (1988). On Cognitive Busyness: When Person Perceivers Meet Persons Perceived. Journal of Personality and Social Psychology, 54(5), 733–740. 
https://doi.org/10.1037/0022-3514.54.5.733 Ginns, P., & Leppink, J. (2019). Special Issue on Cognitive Load Theory: Editorial. Educational Psychology Review, 31(2), 255–259. Goldman, C. S., & Wong, E. H. (1997). Stress and the College Student. Education, 117(4), 604–611. Goodman, B., & Flaxman, S. (2017). European Union Regulations on Algorithmic Decision-Making and a “Right to Explanation”. AI Magazine, 38(3), 50–57. https://doi.org/10.1609/aimag.v38i3.2741 Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S., & Yang, G.-Z. (2019). XAI—Explainable Artificial Intelligence. Science Robotics, 4(37), eaay7120. Gupta, K., Hajika, R., Pai, Y. S., Duenser, A., Lochner, M., & Billinghurst, M. (2019). In AI We Trust: Investigating the Relationship between Biosignals, Trust and Cognitive Load in VR. Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology, 1–10. https://doi.org/10.1145/3359996.3364276 Gyasi, E. A., Handroos, H., & Kah, P. (2019). Survey on Artificial Intelligence (AI) Applied in Welding: A Future Scenario of the Influence of AI on Technological, Economic, Educational and Social Changes. Procedia Manufacturing, 38, 702–714. https://doi.org/10.1016/j.promfg.2020.01.095 Haldorai, A., Murugan, S., Umamaheswari, K., & Ramu, A. (2020). Evolution, Challenges, and Application of Intelligent ICT Education: An Overview. Computer Applications in Engineering Education, 29(3), 562–571. https://doi.org/10.1002/cae.22217 Haleem, A., Javaid, M., & Khan, I. H. (2019). Current Status and Applications of Artificial Intelligence (AI) in Medical Field: An Overview. Current Medicine Research and Practice, 9(6), 231–237. https://doi.org/10.1016/j.cmrp.2019.11.005 Hamet, P., & Tremblay, J. (2017). Artificial Intelligence in Medicine. Metabolism, 69, S36–S40. https://doi.org/10.1016/j.metabol.2017.01.011 Haqiq, N., Zaim, M., Bouganssa, I., Salbi, A., & Sbihi, M. (2022). 
AIoT with I4.0: The Effect of Internet of Things and Artificial Intelligence Technologies on the Industry 4.0. ITM Web of Conferences, 46, 03002. EDP Sciences. https://doi.org/10.1051/itmconf/20224603002 Harkut, D. G., & Kasat, K. (2019). Artificial intelligence—Scope and limitations. IntechOpen. https://doi.org/10.5772/intechopen.77611 Hayes-Roth, F., & Jacobstein, N. (1994). The State of Knowledge-Based Systems. Communications of the ACM, 37(3), 26–39. https://doi.org/10.1145/175247.175249 Heinssen, R. K., Glass, C. R., & Knight, L. A. (1987). Assessing Computer Anxiety: Development and Validation of the Computer Anxiety Rating Scale. Computers in Human Behavior, 3(1), 49–59. https://doi.org/10.1016/0747-5632(87)90010-0 Herm, L.-V. (2023). Impact of Explainable AI on Cognitive Load: Insights From an Empirical Study. In Proceedings of the 31st European Conference on Information Systems (ECIS 2023). https://doi.org/10.48550/arxiv.2304.08861 Hernandez, I., & Preston, J. L. (2013). Disfluency Disrupts the Confirmation Bias. Journal of Experimental Social Psychology, 49(1), 178–182. https://doi.org/10.1016/j.jesp.2012.08.010 Hinson, J. M., Jameson, T. L., & Whitney, P. (2002). Somatic Markers, Working Memory, and Decision Making. Cognitive, Affective, & Behavioral Neuroscience, 2(4), 341–353. https://doi.org/10.3758/cabn.2.4.341 Hockey, G. R. J. (1997). Compensatory Control in the Regulation of Human Performance Under Stress and High Workload: A Cognitive-Energetical Framework. Biological Psychology, 45(1), 73–93. https://doi.org/10.1016/s0301-0511(96)05223-4 Hoffman, R. R., Mueller, S. T., Klein, G., & Litman, J. (2023). Measures for Explainable AI: Explanation Goodness, User Satisfaction, Mental Models, Curiosity, Trust, and Human-AI Performance. Frontiers in Computer Science, 5, 1096257. https://doi.org/10.3389/fcomp.2023.1096257 Holzinger, A., Langs, G., Denk, H., Zatloukal, K., & Müller, H. (2019). 
Causability and Explainability of Artificial Intelligence in Medicine. WIREs Data Mining and Knowledge Discovery, 9(4), e1312. https://doi.org/10.1002/widm.1312 Ibrahim, R. Z. A. R., Bakar, A. A., & Nor, S. B. M. (2007). Techno Stress: A Study Among Academic and Non Academic Staff. Lecture Notes in Computer Science, 4566, 118–124. https://doi.org/10.1007/978-3-540-73333-1_15 Islam, M. R., Ahmed, M. U., Barua, S., & Begum, S. (2022). A Systematic Review of Explainable Artificial Intelligence in Terms of Different Application Domains and Tasks. Applied Sciences, 12(3), 1353. https://doi.org/10.3390/app12031353 Janson, J., & Rohleder, N. (2017). Distraction Coping Predicts Better Cortisol Recovery After Acute Psychosocial Stress. Biological Psychology, 128, 117–124. https://doi.org/10.1016/j.biopsycho.2017.07.014 Jex, S. M., & Beehr, T. A. (1991). Emerging Theoretical and Methodological Issues in the Study of Work-related Stress. Research in Personnel and Human Resources Management, 9, 311–365. Jiménez-Luna, J., Grisoni, F., & Schneider, G. (2020). Drug Discovery With Explainable Artificial Intelligence. Nature Machine Intelligence, 2(10), 573–584. https://doi.org/10.1038/s42256-020-00236-4 Kalyuga, S. (2011). Cognitive Load Theory: Implications for Affective Computing. In Proceedings of the 24th International FLAIRS Conference. Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in My Hand: Who’s the Fairest in the Land? On the Interpretations, Illustrations, and Implications of Artificial Intelligence. Business Horizons, 62(1), 15–25. https://doi.org/10.1016/j.bushor.2018.08.004 Karran, A. J., Demazure, T., Hudon, A., Senecal, S., & Léger, P.-M. (2022). Designing for Confidence: The Impact of Visualizing Artificial Intelligence Decisions. Frontiers in Neuroscience, 16, 883385. https://doi.org/10.3389/fnins.2022.883385 Kassin, S. M., Fein, S., & Markus, H. R. (2011). Social psychology (8th ed). Sage. Katzir, M., & Posten, A.-C. (2023). 
Are There Dominant Response Tendencies for Social Reactions? Trust Trumps Mistrust—Evidence From a Dominant Behavior Measure (DBM). Journal of Personality and Social Psychology, 125(1), 57–81. https://doi.org/10.1037/pspa0000334 Kaun, A. (2022). Suing the Algorithm: The Mundanization of Automated Decision-Making in Public Services Through Litigation. Information, Communication & Society, 25(14), 2046–2062. https://doi.org/10.1080/1369118X.2021.1924827 Kirsh, D. (2000). A Few Thoughts on Cognitive Overload. Intellectica, 30(1), 19–51. https://doi.org/10.3406/intel.2000.1592 Klingberg, T. (2008). The overflowing brain: Information overload and the limits of working memory. Oxford University Press. https://doi.org/10.5860/choice.46-5905 Koh, P. W., & Liang, P. (2017). Understanding Black-Box Predictions via Influence Functions. In Proceedings of the 34th International Conference on Machine Learning, 1885–1894. PMLR. Kraus, J., Scholz, D., Stiegemeier, D., & Baumann, M. (2020). The More You Know: Trust Dynamics and Calibration in Highly Automated Driving and the Effects of Take-Overs, System Malfunction, and System Transparency. Human Factors, 62(5), 718–736. https://doi.org/10.1177/0018720819853686 Kulesza, T., Stumpf, S., Burnett, M., Yang, S., Kwan, I., & Wong, W.-K. (2013). Too Much, Too Little, or Just Right? Ways Explanations Impact End Users’ Mental Models. In Proceedings of the 2013 IEEE Symposium on Visual Languages and Human-Centric Computing, 3–10. IEEE. https://doi.org/10.1109/VLHCC.2013.6645235 Kumain, S. C., Kumain, K., & Chaudhary, P. (2020). AI Impact on Various Domain: An Overview. International Journal of Management, 11(190), 1433–1439. Larmuseau, C., Coucke, H., Kerkhove, P., Desmet, P., & Depaepe, F. (2019). Cognitive Load During Online Complex Problem-Solving in a Teacher Training Context. In Proceedings of the European Distance and E-Learning Network Conference (EDEN 2019), 466–474. Laudon, K. C., & Laudon, J. P. (2022). 
Management information systems: Managing the digital firm (17th ed.). Pearson. Lazarus, R. S., & Folkman, S. (1987). Transactional Theory and Research on Emotions and Coping. European Journal of Personality, 1(3), 141–169. https://doi.org/10.1002/per.2410010304 Lee, J. (2018). Task Complexity, Cognitive Load, and L1 Speech. Applied Linguistics, 40(3), 506–539. Li, H., Yu, L., & He, W. (2019). The Impact of GDPR on Global Technology Development. Journal of Global Information Technology Management, 22(1), 1–6. https://doi.org/10.1080/1097198X.2019.1569186 Liu, C.-F., Chen, Z.-C., Kuo, S.-C., & Lin, T.-C. (2022). Does AI Explainability Affect Physicians’ Intention to Use AI? International Journal of Medical Informatics, 168, 104884. https://doi.org/10.1016/j.ijmedinf.2022.104884 Lundberg, S. M., & Lee, S.-I. (2017). A Unified Approach to Interpreting Model Predictions. In Proceedings of the 31st International Conference on Neural Information Processing Systems, 30, 4768–4777. Madsen, M., & Gregor, S. (2000). Measuring Human-Computer Trust. In Proceedings of the 11th Australasian Conference on Information Systems, 53, 6–8. Mahalakshmi, S., & Latha, R. (2019). Artificial Intelligence with the Internet of Things on Healthcare systems: A Survey. International Journal of Advanced Trends in Computer Science and Engineering, 8(6), 2847–2854. https://doi.org/10.30534/ijatcse/2019/27862019 Mahapatra, M., & Pillai, R. (2018). Technostress in Organizations: A Review of Literature. In Proceedings of the 26th European Conference on Information Systems (ECIS 2018), 99. Marcoulides, G. A. (1989). Measuring Computer Anxiety: The Computer Anxiety Scale. Educational and Psychological Measurement, 49(3), 733–739. https://doi.org/10.1177/001316448904900328 Markus, A. F., Kors, J. A., & Rijnbeek, P. R. (2021). The Role of Explainability in Creating Trustworthy Artificial Intelligence for Health Care: A Comprehensive Survey of the Terminology, Design Choices, and Evaluation Strategies. 
Journal of Biomedical Informatics, 113, 103655. https://doi.org/10.1016/j.jbi.2020.103655 Mazmanian, M., Orlikowski, W. J., & Yates, J. (2013). The Autonomy Paradox: The Implications of Mobile Email Devices for Knowledge Professionals. Organization Science, 24(5), 1337–1357. https://doi.org/10.1287/orsc.1120.0806 Merry, M., Riddle, P., & Warren, J. (2021). A Mental Models Approach for Defining Explainable Artificial Intelligence. BMC Medical Informatics and Decision Making, 21(1), 344. https://doi.org/10.1186/s12911-021-01703-7 Mishra, S., Prakash, M., Hafsa, A., & Anchana, G. (2018). ANFIS to Detect Brain Tumor Using MRI. International Journal of Engineering and Technology, 7(3.27), 209–214. https://doi.org/10.14419/ijet.v7i3.27.17763 Mohammadi, B., Malik, N., Derdenger, T., & Srinivasan, K. (2022). Sell Me the Blackbox! Regulating eXplainable Artificial Intelligence (XAI) May Harm Consumers (arXiv preprint arXiv:2209.03499). https://doi.org/10.48550/ARXIV.2209.03499 Montavon, G., Samek, W., & Müller, K.-R. (2018). Methods for Interpreting and Understanding Deep Neural Networks. Digital Signal Processing, 73, 1–15. https://doi.org/10.1016/j.dsp.2017.10.011 Nagahisarchoghaei, M., Nur, N., Cummins, L., Nur, N., Karimi, M. M., Nandanwar, S., Bhattacharyya, S., & Rahimi, S. (2023). An Empirical Survey on Explainable AI Technologies: Recent Trends, Use-Cases, and Categories from Technical and Application Perspectives. Electronics, 12(5), 1092. https://doi.org/10.3390/electronics12051092 Nunnally, J. (1978). Psychometric theory (2nd ed.). McGraw-Hill. Oei, N. Y. L., Everaerd, W. T., Elzinga, B. M., van Well, S., & Bermond, B. (2006). Psychosocial Stress Impairs Working Memory at High Loads: An Association With Cortisol Levels and Memory Retrieval. Stress, 9(3), 133–141. https://doi.org/10.1080/10253890600965773 Oei, N. Y. L., Veer, I. M., Wolf, O. T., Spinhoven, P., Rombouts, S. A. R. B., & Elzinga, B. M. (2011). 
Stress Shifts Brain Activation Towards Ventral ‘Affective’ Areas During Emotional Distraction. Social Cognitive and Affective Neuroscience, 7(4), 403–412. https://doi.org/10.1093/scan/nsr024 Palacio, S., Lucieri, A., Munir, M., Ahmed, S., Hees, J., & Dengel, A. (2021). XAI Handbook: Towards a Unified Framework for Explainable AI. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 3766–3775. https://doi.org/10.1109/iccvw54120.2021.00420 Parayitam, S., & Dooley, R. S. (2009). The Interplay Between Cognitive- and Affective Conflict and Cognition- and Affect-based Trust in Influencing Decision Outcomes. Journal of Business Research, 62(8), 789–796. Parsons, K. S., Warm, J. S., Nelson, W. T., Matthews, G., & Riley, M. A. (2007). Detection-Action Linkage in Vigilance: Effects on Workload and Stress. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 51(19), 1291–1295. https://doi.org/10.1177/154193120705101902 Payrovnaziri, S. N., Chen, Z., Rengifo-Moreno, P., Miller, T., Bian, J., Chen, J. H., Liu, X., & He, Z. (2020). Explainable Artificial Intelligence Models Using Real-world Electronic Health Record Data: A Systematic Scoping Review. Journal of the American Medical Informatics Association, 27(7), 1173–1185. https://doi.org/10.1093/jamia/ocaa053 Pessin, J. (1933). The Comparative Effects of Social and Mechanical Stimulation on Memorizing. The American Journal of Psychology, 45(2), 263–270. https://doi.org/10.2307/1414277 Petkovic, D. (2023). It is Not ‘Accuracy Vs. Explainability’—We Need Both for Trustworthy AI Systems. IEEE Transactions on Technology and Society, 4(1), 46–53. https://doi.org/10.48550/arxiv.2212.11136 Pogash, R. M., Streufert, S., Streufert, S. C., Milton, S., & Hershey Medical Center Hershey PA Dept of Behavioral Science. (1983). Effects of Task Load, Task Speed and Cognitive Complexity on Satisfaction. Puri, R. (2015). Mindfulness as a Stress Buster in Organizational Set Up. International Journal of Education and Management Studies, 5(4), 366–368. Rabbitt, P. 
(1978). Hand Dominance, Attention, and the Choice Between Responses. Quarterly Journal of Experimental Psychology, 30(3), 407–416. Radue, E.-W., Weigel, M., Wiest, R., & Urbach, H. (2016). Introduction to Magnetic Resonance Imaging for Neurologists. Continuum: Lifelong Learning in Neurology, 22(5), 1379–1398. https://doi.org/10.1212/con.0000000000000391 Ragu-Nathan, T. S., Tarafdar, M., Ragu-Nathan, B. S., & Tu, Q. (2008). The Consequences of Technostress for End Users in Organizations: Conceptual Development and Empirical Validation. Information Systems Research, 19(4), 417–433. https://doi.org/10.1287/isre.1070.0165 Raio, C. M., Orederu, T. A., Palazzolo, L., Shurick, A. A., & Phelps, E. A. (2013). Cognitive Emotion Regulation Fails the Stress Test. Proceedings of the National Academy of Sciences, 110(37), 15139–15144. https://doi.org/10.1073/pnas.1305706110 Regan, D. T., & Cheng, J. B. (1973). Distraction and Attitude Change: A Resolution. Journal of Experimental Social Psychology, 9(2), 138–147. Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). ‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144. https://doi.org/10.1145/2939672.2939778 Riedl, R., Kindermann, H., Auinger, A., & Javor, A. (2012). Technostress from a Neurobiological Perspective—System Breakdown Increases the Stress Hormone Cortisol in Computer Users. Business & Information Systems Engineering, 4(2), 61–69. https://doi.org/10.1007/s12599-012-0207-7 Rosen, L. D., Sears, D. C., & Weil, M. M. (1993). Treating Technophobia: A Longitudinal Evaluation of the Computerphobia Reduction Program. Computers in Human Behavior, 9(1), 27–50. https://doi.org/10.1016/0747-5632(93)90019-o Rudnick, A. (2019). The Black Box Myth: Artificial Intelligence’s Threat Re-Examined. International Journal of Extreme Automation and Connectivity in Healthcare, 1(1), 1–3. https://doi.org/10.4018/IJEACH.2019010101 Rydval, O. (2012). 
The Causal Effect of Cognitive Abilities on Economic Behavior: Evidence from a Forecasting Task with Varying Cognitive Load. CERGE-EI Working Paper Series. https://doi.org/10.2139/ssrn.2046942 Salanova, M., Llorens, S., & Ventura, M. (2014). Technostress: The Dark Side of Technologies. In The Impact of ICT on Quality of Working Life (pp. 87–103). Springer Netherlands. https://doi.org/10.1007/978-94-017-8854-0_6 Salimi, A., & Dadashpour, S. (2012). Task Complexity and Language Production Dilemmas (Robinson’s Cognition Hypothesis vs. Skehan’s Trade-off Model). Procedia - Social and Behavioral Sciences, 46, 643–652. Salo, M., Pirkkalainen, H., Chua, C. E. H., & Koskelainen, T. (2022). Formation and Mitigation of Technostress in the Personal Use of IT. Management Information Systems Quarterly, 46(2), 1073–1108. https://doi.org/10.25300/misq/2022/14950 Samek, W., Montavon, G., Vedaldi, A., Hansen, L. K., & Müller, K.-R. (Eds.). (2019). Explainable AI: Interpreting, explaining and visualizing deep learning (Vol. 11700). Springer. https://doi.org/10.1007/978-3-030-28954-6 Samson, K., & Kostyszyn, P. (2015). Effects of Cognitive Load on Trusting Behavior – An Experiment Using the Trust Game. PLOS ONE, 10(5), 1–10. https://doi.org/10.1371/journal.pone.0127680 Sanajou, N., Zohali, L., & Zabihi, F. (2017). Do Task Complexity Demands Influence the Learners’ Perception of Task Difficulty? International Journal of Applied Linguistics and English Literature, 6(6), 71–77. Sanders, G. S. (1981). Driven by Distraction: An Integrative Review of Social Facilitation Theory and Research. Journal of Experimental Social Psychology, 17(3), 227–251. Sanders, G. S., Baron, R. S., & Moore, D. L. (1978). Distraction and Social Comparison as Mediators of Social Facilitation Effects. Journal of Experimental Social Psychology, 14(3), 291–303. Sarason, I. G. (1980). Test Anxiety: Theory, Research, and Applications. L. Erlbaum Associates. 
Sartaj, B., Ankita, K., Prajakta, B., Sameer, D., Navoneel, C., & Swati, K. (2020). Brain Tumor Classification (MRI) [dataset]. https://doi.org/10.34740/kaggle/dsv/1183165 Scheerer, M., & Reussner, R. (2021). Reliability Prediction of Self-Adaptive Systems Managing Uncertain AI Black-Box Components. In Proceedings of the 2021 International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS), 111–117. IEEE. https://doi.org/10.1109/SEAMS51251.2021.00024 Schmeck, A., Opfermann, M., van Gog, T., Paas, F., & Leutner, D. (2015). Measuring Cognitive Load With Subjective Rating Scales During Problem Solving: Differences Between Immediate and Delayed Ratings. Instructional Science, 43(1), 93–114. https://doi.org/10.1007/s11251-014-9328-3 Schoeffer, J., Machowski, Y., & Kuehl, N. (2021). A Study on Fairness and Trust Perceptions in Automated Decision Making. In Joint Proceedings of the ACM IUI 2021 Workshops, 170005. https://doi.org/10.5445/IR/1000130551 Schoofs, D., Wolf, O. T., & Smeets, T. (2009). Cold Pressor Stress Impairs Performance on Working Memory Tasks Requiring Executive Functions in Healthy Young Men. Behavioral Neuroscience, 123(5), 1066–1075. https://doi.org/10.1037/a0016980 Schwabe, L., & Wolf, O. T. (2010). Emotional Modulation of the Attentional Blink: Is There an Effect of Stress? Emotion, 10(2), 283–288. https://doi.org/10.1037/a0017751 Shabani, M., & Marelli, L. (2019). Re‐Identifiability of Genomic Data and the GDPR: Assessing the Re‐identifiability of Genomic Data in Light of the EU General Data Protection Regulation. EMBO Reports, 20(6), e48316. https://doi.org/10.15252/embr.201948316 Shackman, A. J., Maxwell, J. S., McMenamin, B. W., Greischar, L. L., & Davidson, R. J. (2011). Stress Potentiates Early and Attenuates Late Stages of Visual Processing. The Journal of Neuroscience, 31(3), 1156–1161. https://doi.org/10.1523/jneurosci.3384-10.2011 Shimazu, A., & Schaufeli, W. B. (2007). 
Does Distraction Facilitate Problem-Focused Coping with Job Stress? A 1 year Longitudinal Study. Journal of Behavioral Medicine, 30(5), 423–434. Shiv, B., & Fedorikhin, A. (1999). Heart and Mind in Conflict: The Interplay of Affect and Cognition in Consumer Decision Making. Journal of Consumer Research, 26(3), 278–292. https://doi.org/10.1086/209563 Shu, Q., Tu, Q., & Wang, K. (2011). The Impact of Computer Self-Efficacy and Technology Dependence on Computer-Related Technostress: A Social Cognitive Theory Perspective. International Journal of Human-Computer Interaction, 27(10), 923–939. https://doi.org/10.1080/10447318.2011.555313 Sokol, K., & Flach, P. (2021). Explainability Is in the Mind of the Beholder: Establishing the Foundations of Explainable Artificial Intelligence (arXiv preprint arXiv:2112.14466). https://doi.org/10.48550/ARXIV.2112.14466 Sultana, T., & Nemati, H. R. (2021). Impact of Explainable AI and Task Complexity on Human-Machine Symbiosis. In Proceedings of the 27th Americas Conference on Information Systems (AMCIS 2021), 1715. Susanti, V. D., Dwijanto, D., & Mariani, S. (2021). Working Memory Dalam Pembelajaran Matematika: Sebuah Kajian Teori [Working Memory in Mathematics Learning: A Theoretical Review]. Prima Magistra: Jurnal Ilmiah Kependidikan, 3(1), 62–70. Swann, W. B., Hixon, J. G., Stein-Seroussi, A., & Gilbert, D. T. (1990). The Fleeting Gleam of Praise: Cognitive Processes Underlying Behavioral Reactions to Self-relevant Feedback. Journal of Personality and Social Psychology, 59(1), 17–26. https://doi.org/10.1037/0022-3514.59.1.17 Sweller, J. (1988). Cognitive Load During Problem Solving: Effects on Learning. Cognitive Science, 12(2), 257–285. https://doi.org/10.1207/s15516709cog1202_4 Sweller, J. (1994). Cognitive Load Theory, Learning Difficulty, and Instructional Design. Learning and Instruction, 4(4), 295–312. https://doi.org/10.1016/0959-4752(94)90003-5 Szalma, J. L., Warm, J. S., Matthews, G., Dember, W. N., Weiler, E. M., Meier, A., & Eggemeier, F. T. (2004). 
Effects of Sensory Modality and Task Duration on Performance, Workload, and Stress in Sustained Attention. Human Factors, 46(2), 219–233. https://doi.org/10.1518/hfes.46.2.219.37334 Tarafdar, M., Tu, Q., Ragu-Nathan, B. S., & Ragu-Nathan, T. S. (2007). The Impact of Technostress on Role Stress and Productivity. Journal of Management Information Systems, 24(1), 301–328. https://doi.org/10.2753/mis0742-1222240109 Tecce, J. J., Savignano-Bowman, J., & Meinbresse, D. (1976). Contingent Negative Variation and the Distraction—Arousal Hypothesis. Electroencephalography and Clinical Neurophysiology, 41(3), 277–286. https://doi.org/10.1016/0013-4694(76)90120-6 Tjoa, E., & Guan, C. (2020). A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI. IEEE Transactions on Neural Networks and Learning Systems, 32(11), 4793–4813. https://doi.org/10.1109/tnnls.2020.3027314 Travis, L. E. (1925). The Effect of a Small Audience Upon Eye-hand Coordination. The Journal of Abnormal and Social Psychology, 20(2), 142–146. https://doi.org/10.1037/h0071311 Tremblay, M., Rethans, J., & Dolmans, D. (2022). Task Complexity and Cognitive Load in Simulation-based Education: A Randomised Trial. Medical Education, 57(2), 161–169. Tu, Q., Wang, K., & Shu, Q. (2005). Computer-Related Technostress in China. Communications of The ACM, 48(4), 77–81. https://doi.org/10.1145/1053291.1053323 Uddin, M. J., Ferdous, M., Rahaman, A., & Ahmad, S. (2023). Mapping of Technostress Research Trends: A Bibliometric Analysis. In Proceedings of the 7th International Conference on Intelligent Computing and Control Systems (ICICCS 2023), 938–943. IEEE. https://doi.org/10.1109/iciccs56967.2023.10142487 Uehara, E., & Landeira-Fernandez, J. (2010). Um Panorama Sobre O Desenvolvimento Da Memória De Trabalho E Seus Prejuízos No Aprendizado Escolar [An Overview of the Development of Working Memory and Its Impairments in School Learning]. Ciências & Cognição, 15(2), 31–41. Van Merriënboer, J. J. G., & Sweller, J. (2010). 
Cognitive Load Theory in Health Professional Education: Design Principles and Strategies. Medical Education, 44(1), 85–93. Vignesh, M., & Thanesh, K. (2020). A Review on Artificial Intelligence (AI) is Future Generation. International Journal of Engineering Research & Technology (IJERT) Eclectic, 8(7). Vilone, G., & Longo, L. (2021). Classification of Explainable Artificial Intelligence Methods through Their Output Formats. Machine Learning and Knowledge Extraction, 3(3), 615–661. https://doi.org/10.3390/make3030032 Wang, D., Yang, Q., Abdul, A., & Lim, B. Y. (2019). Designing Theory-Driven User-Centric Explainable AI. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–15. https://doi.org/10.1145/3290605.3300831 Wang, K., Shu, Q., & Tu, Q. (2008). Technostress Under Different Organizational Environments: An Empirical Investigation. Computers in Human Behavior, 24(6), 3002–3013. https://doi.org/10.1016/j.chb.2008.05.007 Ward, A., Vickers, Z. M., & Mann, T. (2000). Don’t Mind if I Do: Disinhibited Eating Under Cognitive Load. Journal of Personality and Social Psychology, 78(4), 753–763. https://doi.org/10.1037/0022-3514.78.4.753 Waugh, C. E., Shing, E. Z., & Furr, R. M. (2020). Not All Disengagement Coping Strategies Are Created Equal: Positive Distraction, but Not Avoidance, Can Be an Adaptive Coping Strategy for Chronic Life Stressors. Anxiety, Stress, & Coping, 33(5), 511–529. https://doi.org/10.1080/10615806.2020.1755820 Weil, M. M., & Rosen, L. D. (1997). TechnoStress: Coping with Technology @Work @Home @Play. J. Wiley. Yadav, P., Yadav, A., & Agrawal, R. (2022). Use of Artificial Intelligence in the Real World. International Journal of Computer Science and Mobile Computing, 11(12), 83–90. https://doi.org/10.47760/ijcsmc.2022.v11i12.008 Yaverbaum, G. J. (1988). Critical Factors in the User Environment: An Experimental Study of Users, Organizations and Tasks. Management Information Systems Quarterly, 12(1), 75–88. 
https://doi.org/10.2307/248807 Zajonc, R. B. (1965). Social Facilitation: A Solution is Suggested for an Old Unresolved Social Psychological Problem. Science, 149(3681), 269–274. https://doi.org/10.1126/science.149.3681.269 Zevenbergen, B., Woodruff, A., & Kelley, P. G. (2020). Explainability Case Studies (arXiv preprint arXiv:2009.00246). https://doi.org/10.48550/ARXIV.2009.00246 Zhang, K., & Aslan, A. B. (2021). AI Technologies for Education: Recent Research & Future Directions. Computers and Education: Artificial Intelligence, 2, 100025. https://doi.org/10.1016/j.caeai.2021.100025 Zhou, S.-J. (2004). Working Memory in Learning Disabled Children. Chinese Journal of Clinical Psychology, 12(3), 313–317. Zielonka, J. T. (2022). The Impact of Trust in Technology on the Appraisal of Technostress Creators in a Work-Related Context. In Proceedings of the 55th Hawaii International Conference on System Sciences. https://doi.org/10.24251/hicss.2022.715