Academic Output - Conference Paper

Title Building XAI for Fall Prevention System: An Evaluation of the Potential of LIME and SHAP in Bridging Trust Gap
Author 張欣綠 (Chang, Hsin-Lu); Hou, Yun Ti
Contributor Department of Management Information Systems (資管系)
Date 2024-07
Uploaded 15-Nov-2024 10:26:48 (UTC+8)
Abstract AI has become a popular research topic and is developing rapidly in the medical field. Although AI models are increasingly powerful and multifunctional, their 'black box' nature creates a trust gap between AI and its users. This gap could hinder the adoption of AI methods and slow the development of AI in the medical sector. Explainable AI (XAI) offers a potential solution: by explaining a model's predictions, it may increase users' trust in the model. This study integrates the XAI algorithms LIME and SHAP into an AI-based fall prevention system. We designed an experiment to observe whether integrating XAI into the fall prevention system affects three metrics: user trust, satisfaction with the explanations, and comprehension of the explanations, and to assess how these effects differ across models with varying accuracy levels.
Relation Proceedings of Pacific-Asia Conference on Information Systems, AIS
Type conference
URI https://nccur.lib.nccu.edu.tw/handle/140.119/154339
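
The abstract refers to generating model explanations with LIME and SHAP. As a purely illustrative sketch (not the study's implementation), the snippet below shows how local explanations for a single prediction of a generic tabular fall-risk classifier could be produced with the public shap and lime Python libraries; the synthetic data, feature names, and random-forest model are assumptions for demonstration only.

```python
# Illustrative sketch only: LIME and SHAP explanations for a generic tabular
# fall-risk classifier. Data, feature names, and model are placeholders, not
# the system described in the paper. Requires scikit-learn, shap, and lime.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
import shap
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical fall-risk features; the real system's inputs are not specified here.
feature_names = ["age", "gait_speed", "grip_strength", "balance_score", "prior_falls"]

X, y = make_classification(n_samples=500, n_features=len(feature_names),
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# SHAP: model-agnostic explainer over the model's predicted probabilities,
# using a subset of the training data as background.
shap_explainer = shap.Explainer(model.predict_proba, X_train[:100])
shap_values = shap_explainer(X_test[:1])          # explanation for one prediction
print("SHAP attributions (class 'fall'):", shap_values.values[0, :, 1])

# LIME: local surrogate explanation for the same prediction.
lime_explainer = LimeTabularExplainer(X_train, feature_names=feature_names,
                                      class_names=["no fall", "fall"],
                                      mode="classification")
lime_exp = lime_explainer.explain_instance(X_test[0], model.predict_proba,
                                           num_features=5)
print("LIME feature weights:", lime_exp.as_list())
```

In this sketch, SHAP attributes the predicted fall probability to each input feature, while LIME fits a local surrogate model around the same instance; the study's experiment examines how such explanations affect users' trust, satisfaction, and comprehension.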