Title 通過人在迴圈和可解釋性人工智慧提升推薦系統
Advancing Recommendation Systems through Human-in-the-Loop and Explainable AI
Author 陳彥維
Chen, Yen-Wei
Contributors 簡士鎰、郁方
Chien, Shih-Yi; Yu, Fang
陳彥維
Chen, Yen-Wei
Keywords 推薦系統
人在迴圈
可解釋人工智慧
Recommendation System
Human in the loop
Explainable AI
Date 2024
Date uploaded 4-Sep-2024 14:06:33 (UTC+8)
Abstract 推薦系統已成為日常生活中的重要組成部分,然而人工智慧的引入卻帶來了“黑箱”挑戰。個性化也是一個關鍵問題。本研究旨在從人機互動的角度支持推薦系統,利用可解釋人工智慧來闡明人工智慧的決策過程,並採用基於情境感知的代理透明度模型來增強清晰度。此外,研究中還納入了“人在迴圈”的概念,結合人類反饋以顯著提高系統的滿意度和個性化。實證表明,將可解釋人工智慧和人在迴圈整合到推薦系統中,不僅可以提高推薦的信任度,也提高了用戶的使用意圖,並透過持續的反饋,能夠在動態環境中保持優秀的性能。但在整合的同時,也要注意過度透明度所帶來的負面影響。
Recommendation systems have become an integral part of daily life, yet the introduction of AI presents “black box” challenges. Personalization is also a critical issue. This study aims to support recommendation systems from a human-computer interaction perspective by utilizing Explainable AI (XAI) to elucidate AI's decision-making process and adopting an agent transparency model based on situational awareness to enhance clarity. Additionally, the study incorporates the concept of “humans in the loop”, integrating human feedback to significantly improve system satisfaction and personalization. Empirical evidence indicates that integrating XAI and HITL into recommendation systems not only enhances the trustworthiness of recommendations but also increases users' intention to use the system. Through continuous feedback, the system can maintain excellent performance in dynamic environments. However, it is essential to be mindful of the potential negative impacts of excessive transparency during integration.
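The abstract sketches an architecture that couples an explainable recommender with a human feedback loop. As a purely illustrative aid (a minimal sketch, not the thesis's BERT4Rec/SHAP implementation; the item names, genre features, class and function names below are invented), the following Python snippet shows the two ingredients in their simplest form: per-feature contribution scores as an XAI-style explanation of each recommendation, and a like/dislike feedback step that updates the user profile in the loop.

```python
# Minimal illustration of the XAI + human-in-the-loop idea described in the
# abstract. This is NOT the thesis's implementation (which uses BERT4Rec and
# SHAP); items, genres, and all names here are hypothetical.
import numpy as np

GENRES = ["action", "comedy", "drama", "romance", "sci-fi"]   # toy feature space

CATALOG = {                                                   # toy items as genre vectors
    "Movie A": np.array([1.0, 0.0, 0.2, 0.0, 0.9]),
    "Movie B": np.array([0.0, 1.0, 0.1, 0.8, 0.0]),
    "Movie C": np.array([0.3, 0.0, 1.0, 0.4, 0.0]),
}

class FeedbackRecommender:
    """Dot-product recommender with additive explanations and feedback updates."""

    def __init__(self, n_features: int):
        self.profile = np.zeros(n_features)            # learned user preference vector

    def recommend(self, k: int = 1):
        """Rank catalog items by score against the current user profile."""
        ranked = sorted(CATALOG,
                        key=lambda name: float(self.profile @ CATALOG[name]),
                        reverse=True)
        return ranked[:k]

    def explain(self, item: str):
        """Per-feature contributions (profile weight * item feature), a crude
        additive attribution standing in for SHAP-style values."""
        contributions = self.profile * CATALOG[item]
        return sorted(zip(GENRES, contributions), key=lambda t: -abs(t[1]))

    def feedback(self, item: str, liked: bool, lr: float = 0.5):
        """Human-in-the-loop step: move the profile toward or away from the item."""
        self.profile += (lr if liked else -lr) * CATALOG[item]

if __name__ == "__main__":
    rec = FeedbackRecommender(len(GENRES))
    rec.feedback("Movie A", liked=True)                # user likes an action/sci-fi film
    top = rec.recommend(k=1)[0]
    print("recommend:", top)
    print("because  :", rec.explain(top)[:2])          # top two contributing features
```

The thesis operates on a sequential model rather than a static profile vector, but the recommend–explain–collect feedback–update cycle this toy code walks through is the one the abstract describes.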
References
Amershi, S., Cakmak, M., Knox, W. B., & Kulesza, T. (2014). Power to the people: The role of humans in interactive machine learning. AI Magazine, 35(4), 105–120.
Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.-R., & Samek, W. (2015). On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE, 10(7), e0130140.
Bertrand, A., Belloum, R., Eagan, J. R., & Maxwell, W. (2022). How cognitive biases affect XAI-assisted decision-making: A systematic review. Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, 78–91.
Chen, J. Y., Procci, K., Boyce, M., Wright, J., Garcia, A., & Barnes, M. (2014). Situation awareness-based agent transparency. US Army Research Laboratory, 1–29.
Chien, S.-Y., Yang, C.-J., & Yu, F. (2022). XFlag: Explainable fake news detection model on social media. International Journal of Human–Computer Interaction, 38(18–20), 1808–1827.
Cramer, H., Evers, V., Ramlal, S., Van Someren, M., Rutledge, L., Stash, N., Aroyo, L., & Wielinga, B. (2008). The effects of transparency on trust in and acceptance of a content-based art recommender. User Modeling and User-Adapted Interaction, 18, 455–496.
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Endsley, M. R. (1995). Toward a theory of situation awareness in dynamic systems. Human Factors, 37(1), 32–64.
Flemisch, F., Heesen, M., Hesse, T., Kelsch, J., Schieben, A., & Beller, J. (2012). Towards a dynamic balance between humans and automation: Authority, ability, responsibility and control in shared and cooperative control situations. Cognition, Technology & Work, 14, 3–18.
Gedikli, F., Jannach, D., & Ge, M. (2014). How should I explain? A comparison of different explanation types for recommender systems. International Journal of Human-Computer Studies, 72(4), 367–382.
Gomez-Uribe, C. A., & Hunt, N. (2015). The Netflix recommender system: Algorithms, business value, and innovation. ACM Transactions on Management Information Systems (TMIS), 6(4), 1–19.
Grimmelikhuijsen, S. (2023). Explaining why the computer says no: Algorithmic transparency affects the perceived trustworthiness of automated decision-making. Public Administration Review, 83(2), 241–262.
Gunning, D. (2017). Explainable artificial intelligence (XAI). Defense Advanced Research Projects Agency (DARPA), nd Web, 2(2), 1.
Hancock, B., Bordes, A., Mazare, P.-E., & Weston, J. (2019). Learning from dialogue after deployment: Feed yourself, chatbot! arXiv preprint arXiv:1901.05415.
Harper, F. M., & Konstan, J. A. (2015). The MovieLens datasets: History and context. ACM Transactions on Interactive Intelligent Systems (TiiS), 5(4), 1–19.
Herlocker, J. L., Konstan, J. A., & Riedl, J. (2000). Explaining collaborative filtering recommendations. Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work, 241–250.
Ho, S. Y., & Bodoff, D. (2014). The effects of web personalization on user attitude and behavior. MIS Quarterly, 38(2), 497–A10.
Kang, W.-C., & McAuley, J. (2018). Self-attentive sequential recommendation. 2018 IEEE International Conference on Data Mining (ICDM), 197–206.
Kizilcec, R. F. (2016). How much information? Effects of transparency on trust in an algorithmic interface. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 2390–2395.
Knijnenburg, B. P., Willemsen, M. C., Gantner, Z., Soncu, H., & Newell, C. (2012). Explaining the user experience of recommender systems. User Modeling and User-Adapted Interaction, 22, 441–504.
Kovashka, A., Parikh, D., & Grauman, K. (2015). WhittleSearch: Interactive image search with relative attribute feedback. International Journal of Computer Vision, 115, 185–210.
Kulesza, T., Stumpf, S., Burnett, M., & Kwan, I. (2012). Tell me more? The effects of mental model soundness on personalizing an intelligent agent. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1–10.
Lee, J. D. (2012). Trust, trustworthiness, and trustability. Presentation at the Workshop on Human Machine Trust for Robust Autonomous Systems, 31.
Li, J., Tu, Z., Yang, B., Lyu, M. R., & Zhang, T. (2018). Multi-head attention with disagreement regularization. arXiv preprint arXiv:1810.10183.
Li, J., Ren, P., Chen, Z., Ren, Z., Lian, T., & Ma, J. (2017). Neural attentive session-based recommendation. Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, 1419–1428.
Li, J., Miller, A. H., Chopra, S., Ranzato, M., & Weston, J. (2016). Dialogue learning with human-in-the-loop. arXiv preprint arXiv:1611.09823.
Liang, Krishnan, R. G., Hoffman, M. D., & Jebara, T. (2018). Variational autoencoders for collaborative filtering. Proceedings of the 2018 World Wide Web Conference, 689–698.
Liang, Lai, & Ku. (2006). Personalized content recommendation and user satisfaction: Theoretical synthesis and empirical findings. Journal of Management Information Systems, 23(3), 45–70.
Liu, Z., Guo, Y., & Mahmud, J. (2021). When and why does a model fail? A human-in-the-loop error detection framework for sentiment analysis. arXiv preprint arXiv:2106.00954.
Liu, Z., Wang, J., Gong, S., Lu, H., & Tao, D. (2019). Deep reinforcement active learning for human-in-the-loop person re-identification. Proceedings of the IEEE/CVF International Conference on Computer Vision, 6122–6131.
Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 4765–4774. https://arxiv.org/abs/1705.07874
Monarch, R. M. (2021). Human-in-the-loop machine learning: Active learning and annotation for human-centered AI. Simon & Schuster.
Mosqueira-Rey, E., Hernández-Pereira, E., Alonso-Ríos, D., Bobes-Bascarán, J., & Fernández-Leal, Á. (2023). Human-in-the-loop machine learning: A state of the art. Artificial Intelligence Review, 56(4), 3005–3054.
Naiseh, M., Cemiloglu, D., Al Thani, D., Jiang, N., & Ali, R. (2021). Explainable recommendations and calibrated trust: Two systematic user errors. Computer, 54(10), 28–37.
Nilashi, M., Jannach, D., bin Ibrahim, O., Esfahani, M. D., & Ahmadi, H. (2016). Recommendation quality, transparency, and website quality for trust-building in recommendation agents. Electronic Commerce Research and Applications, 19, 70–84.
Pu, P., Chen, L., & Hu, R. (2011). A user-centric evaluation framework for recommender systems. Proceedings of the Fifth ACM Conference on Recommender Systems, 157–164.
Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al. (2018). Improving language understanding by generative pre-training.
Rao, A. S., & Georgeff, M. P. (1995). BDI agents: From theory to practice. Proceedings of the 1st International Conference on Multi-Agent Systems (ICMAS), 95, 312–319.
Rendle, S., Freudenthaler, C., & Schmidt-Thieme, L. (2010). Factorizing personalized Markov chains for next-basket recommendation. Proceedings of the 19th International Conference on World Wide Web, 811–820.
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144.
Shin, D. (2021). The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. International Journal of Human-Computer Studies, 146, 102551.
Simonson, I., & Tversky, A. (1992). Choice in context: Tradeoff contrast and extremeness aversion. Journal of Marketing Research, 29(3), 281–295.
Springer, A., & Whittaker, S. (2019). Progressive disclosure: Empirically motivated approaches to designing effective transparency. Proceedings of the 24th International Conference on Intelligent User Interfaces, 107–120.
Su, X., & Khoshgoftaar, T. M. (2009). A survey of collaborative filtering techniques. Advances in Artificial Intelligence, 2009.
Sun, F., Liu, J., Wu, J., Pei, C., Lin, X., Ou, W., & Jiang, P. (2019). BERT4Rec: Sequential recommendation with bidirectional encoder representations from transformer. Proceedings of the 28th ACM International Conference on Information and Knowledge Management, 1441–1450.
Swartout, W., Paris, C., & Moore, J. (1991). Explanations in knowledge systems: Design for explainable expert systems. IEEE Expert, 6(3), 58–64.
Tam, K. Y., & Ho, S. Y. (2006). Understanding the impact of web personalization on user information processing and decision outcomes. MIS Quarterly, 865–890.
Tang, J., & Wang, K. (2018). Personalized top-N sequential recommendation via convolutional sequence embedding. Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, 565–573.
Tintarev, N., & Masthoff, J. (2012). Evaluating the effectiveness of explanations for recommender systems: Methodological issues and empirical studies on the impact of personalization. User Modeling and User-Adapted Interaction, 22, 399–439.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30.
Vig, J., Sen, S., & Riedl, J. (2009). Tagsplanations: Explaining recommendations using tags. Proceedings of the 14th International Conference on Intelligent User Interfaces, 47–56.
Wright, J. L., Chen, J. Y., Barnes, M. J., & Boyce, M. W. (2015). The effects of information level on human-agent interaction for route planning. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 59(1), 811–815.
Wu, X., Xiao, L., Sun, Y., Zhang, J., Ma, T., & He, L. (2022). A survey of human-in-the-loop for machine learning. Future Generation Computer Systems, 135, 364–381.
Zhang, Y., & Chen, X. (2018). Explainable recommendation: A survey and new perspectives. arXiv preprint arXiv:1804.11192.
Description Master's thesis
National Chengchi University
Department of Management Information Systems
111356046
Source http://thesis.lib.nccu.edu.tw/record/#G0111356046
Type thesis
URI https://nccur.lib.nccu.edu.tw/handle/140.119/153164
Table of contents
1 Introduction 1
2 Related Work 6
  2.1 Recommendation System 6
    2.1.1 Collaborative filtering 6
    2.1.2 Content-based 7
    2.1.3 Comparison of CF and CB 7
    2.1.4 Movie Recommendation System Review 8
    2.1.5 Bert4Rec: Implementing the Recommender System 9
      Embedding Layer 10
      Transformer Layer 10
      Output Layer 12
      Training process and loss function of Bert4Rec 12
  2.2 XAI 13
    2.2.1 Overview XAI 13
    2.2.2 Interpretable Models 13
    2.2.3 Post-hoc Explainability 14
    2.2.4 SHAP: Explaining Recommendations with XAI 14
  2.3 Situation awareness-based agent transparency 16
  2.4 Human in the loop 18
3 Methodology 22
  3.1 SAT Mechanism: Enhancing Transparency 23
  3.2 Human in the loop: Enhancing the Performance of Recommendation System 24
  3.3 Evaluation Metrics 26
    3.3.1 Subjective indicators 26
    3.3.2 Objective indicators 29
4 Model Validation 30
  4.1 Dataset 30
  4.2 Experiment setup 31
  4.3 Comparison and validation of recommended systems 31
5 User study 33
  5.1 Cold Start 33
  5.2 Participants 34
  5.3 Experimental design 34
6 Result 38
  6.1 Participant Questionnaires 38
    6.1.1 Personalization and Recommendation Quality 38
    6.1.2 Explainability and Transparency 39
    6.1.3 Conviction, Trust, Intention to use and Satisfaction 40
  6.2 System-related Data 41
    6.2.1 User's Like and Dislike in SAT condition 41
7 Discussion 43
  7.1 The sole effect of XAI 43
  7.2 The sole effect of HITL 44
  7.3 The collaborative effect of XAI and HITL 45
    SAT-1 Condition 46
    SAT-2a Condition 46
    SAT-2b Condition 47
    Comprehensive Discussion 49
  7.4 Other findings 50
    7.4.1 Movies Already Watched and Participants' Responses 50
    7.4.2 The average time in different conditions 50
8 Conclusion 52
Reference 54
Appendix 61
  Personalization (Ho & Bodoff, 2014) 61
  Recommendation Quality (Nilashi et al., 2016) 61
  Explainability (Nilashi et al., 2016) 61
  Transparency (Nilashi et al., 2016) 62
  Conviction (Pu et al., 2011) 62
  Trust (Shin, 2021) 62
  Intention to Use (Pu et al., 2011) 63
  Satisfaction (Knijnenburg et al., 2012) 63
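The "Objective indicators" and "Model Validation" chapters listed above evaluate the recommender offline. As an assumed illustration (these lines are not taken from the thesis; the function names and the example ranking are hypothetical), the sketch below shows two ranking metrics that are standard for leave-one-out validation of sequential recommenders such as BERT4Rec: hit rate (HR@K) and normalized discounted cumulative gain (NDCG@K) with a single held-out target item.

```python
# Hypothetical sketch of HR@K and NDCG@K for a single held-out target item,
# the usual offline metrics for sequential recommenders; not the thesis's code.
import math

def hit_rate_at_k(ranked_items, target, k=10):
    """1.0 if the held-out target appears in the top-K ranking, else 0.0."""
    return 1.0 if target in ranked_items[:k] else 0.0

def ndcg_at_k(ranked_items, target, k=10):
    """With one relevant item, NDCG@K reduces to 1 / log2(rank + 1)."""
    top_k = ranked_items[:k]
    if target in top_k:
        rank = top_k.index(target) + 1               # 1-based position in the list
        return 1.0 / math.log2(rank + 1)
    return 0.0

if __name__ == "__main__":
    ranking = ["item_42", "item_7", "item_3"]        # hypothetical model output
    print(hit_rate_at_k(ranking, "item_7"))          # 1.0
    print(round(ndcg_at_k(ranking, "item_7"), 3))    # 1 / log2(3) ≈ 0.631
```

Averaging both metrics over every user's held-out interaction yields the aggregate scores typically used to compare a model against baselines in this kind of offline validation.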
Format application/pdf (3055768 bytes)