Title: 使用SHAP加強基於卷積神經網路的跌倒預防系統的解釋性以提高患者安全性
Title (English): Enhancing Explainability in Convolutional Neural Network-Based Fall Prevention System for Patient Safety: A SHAP-Based Approach
Author: Wu, Yi-Wen (吳伊雯)
Advisor: Chang, Hsin-Lu (張欣綠)
Keywords:
AI-based Fall Detection
Explainable AI (XAI)
SHapley Additive exPlanations (SHAP)
User Trust
User Behavior
False Alarms
Explainability
Human-Machine Interaction
Date: 2024
Uploaded: 4-Sep-2024 14:03:43 (UTC+8)
Abstract:
This research investigates how error signals influence user trust and behavior in AI-based fall prevention systems, focusing on the emotional and behavioral responses to inaccuracies such as false alarms and false negatives. These errors can decrease trust and hinder technology adoption. We examine the role of Explainable AI (XAI), specifically SHapley Additive exPlanations (SHAP), in improving user trust and system comprehension. Our hypothesis is that clear and comprehensive explanations of system outputs via XAI significantly enhance user attitudes toward the technology.

Our findings reveal complex dynamics between error rates and their effects on user trust and satisfaction. Higher error rates drastically reduce trust, especially when compounded by the use of XAI. However, the direct effect of error rates on trust is minimal; their indirect effects, operating through improved system performance and perceived reliability, significantly enhance trust. Both the Explanation Satisfaction and Comprehensiveness of Explanation models demonstrate full mediation, indicating that the model's impact on trust operates primarily through enhanced explanations.

The study underscores the importance of effective error management and clear explanations in fostering trust, particularly in critical applications such as fall prevention. Our research contributes insights into increasing transparency and explainability in AI systems, providing valuable guidance for developers seeking to improve user receptiveness and trust in AI technologies.
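The abstract describes pairing a convolutional fall classifier with SHAP attributions so that users can see which parts of an input pushed the system toward a "fall" decision. The thesis implementation is not reproduced in this record, so the following is only a minimal sketch of how SHAP values might be computed for an image-based fall classifier with the open-source `shap` package; the tiny stand-in model, input shape, and random frames are illustrative assumptions, not the thesis system.

```python
# Minimal sketch (not the thesis implementation): SHAP attributions for a CNN
# that labels frames as "no fall" / "fall". The untrained toy model and random
# frames below are placeholders standing in for the trained network and real data.
import numpy as np
import shap
import tensorflow as tf

# Stand-in CNN: 64x64 grayscale input, two output classes (no fall / fall).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

background = np.random.rand(50, 64, 64, 1).astype("float32")  # reference sample for the explainer
frames = np.random.rand(3, 64, 64, 1).astype("float32")       # frames whose predictions we explain

# GradientExplainer approximates SHAP values for differentiable models such as CNNs.
explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(frames)

# Visualize which pixels pushed each frame toward the "fall" class.
shap.image_plot(shap_values, frames)
```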
Description: Master's thesis
Institution: 國立政治大學 (National Chengchi University)
Department: 資訊管理學系 (Department of Management Information Systems)
Student ID: 111356016
Source: http://thesis.lib.nccu.edu.tw/record/#G0111356016
Identifier: G0111356016
URI: https://nccur.lib.nccu.edu.tw/handle/140.119/153150
Type: thesis
Format: application/pdf, 1,518,116 bytes
Table of Contents:
Chapter 1. Introduction
Chapter 2. Literature Review
2.1 XAI
2.2 XAI in Fall Prevention
2.3 Human-XAI Interaction
Chapter 3. Research Design
3.1 System Development
3.1.1 Data Collection
3.1.2 Explainable AI Development
3.1.3 Human-XAI Interaction
3.2 Experimental Procedure
3.3 Experiment Process
3.4 Questionnaire Measurement
3.4.1 Trust
3.4.2 Satisfaction of Explanation
3.4.3 Comprehensiveness Test
3.5 Data Collection
3.6 Construct Validation
3.6.1 Kaiser-Meyer-Olkin (KMO) Test
3.6.2 Confirmatory Factor Analysis (CFA)
Chapter 4. Experimental Results
4.1 Trust Model
4.2 Explanation Satisfaction Model
4.3 Comprehensiveness of Explanation Model
Chapter 5. Discussion
Chapter 6. Conclusion
References
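Chapters 4.1 through 4.3 cover the Trust, Explanation Satisfaction, and Comprehensiveness of Explanation models, and the abstract reports that the last two show full mediation of the model's effect on trust. As a rough illustration of how such a regression-based mediation check can be run, the sketch below uses simulated placeholder data; the variable names, coefficients, and sample are assumptions for demonstration, not the thesis dataset or its exact analysis.

```python
# Illustrative regression-based mediation check: does "explanation satisfaction"
# mediate the effect of the system condition on trust? Data are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
condition = rng.integers(0, 2, n)                       # 0 = baseline CNN, 1 = CNN + SHAP (hypothetical coding)
satisfaction = 0.8 * condition + rng.normal(0, 1, n)    # hypothetical mediator
trust = 0.7 * satisfaction + 0.1 * condition + rng.normal(0, 1, n)
df = pd.DataFrame({"condition": condition, "satisfaction": satisfaction, "trust": trust})

total = smf.ols("trust ~ condition", df).fit()                     # total effect (path c)
a_path = smf.ols("satisfaction ~ condition", df).fit()             # path a
b_direct = smf.ols("trust ~ condition + satisfaction", df).fit()   # paths b and c'

print("total effect c:   ", round(total.params["condition"], 3))
print("direct effect c': ", round(b_direct.params["condition"], 3))
print("indirect a*b:     ", round(a_path.params["condition"] * b_direct.params["satisfaction"], 3))
# Full mediation is suggested when c is significant, c' shrinks toward zero, and the
# indirect effect a*b accounts for most of the total effect.
```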