Title 深度學習為導向的可解釋性推薦系統-提升公部門的線上補助平台服務績效
Explainable Deep Learning-Based Recommendation Systems: Enhancing the Services of Public Sector Subsidy Online Platform
Author 莊鈞諺 (Zhuang, Jun-Yan)
Advisor 胡毓忠 (Hu, Yuh-Jong)
Keywords Explainable AI; Neural Collaborative Filtering; SHAP; Personalized Recommendation System
Date 2023
Uploaded 1-Sep-2023 15:39:58 (UTC+8)
Abstract This study addresses the low utilization effectiveness of a government subsidy platform by using a personalized recommendation system to help Taiwanese small and medium-sized enterprises (SMEs) with their digital transformation. To this end, a deep-learning-based collaborative filtering recommendation system was designed and trained; it learns from multiple features of users and candidate items and presents each user with the five products they are most likely to be interested in.
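The abstract only summarizes the recommendation model, so the following is a minimal illustrative sketch rather than the thesis's actual code: a neural collaborative filtering model in Python/Keras that embeds user and item IDs, combines them with side features, and ranks a user's top-five products. All layer sizes, feature dimensions, and function names here are assumptions.

```python
# Minimal illustrative sketch (assumed sizes and feature fields, not the thesis's code):
# a neural collaborative filtering model that embeds user and item IDs, concatenates
# them with user/item side features, and scores each user-item pair.
import numpy as np
from tensorflow.keras import layers, Model

def build_ncf(num_users, num_items, num_user_feats, num_item_feats, emb_dim=32):
    user_id = layers.Input(shape=(1,), dtype="int32", name="user_id")
    item_id = layers.Input(shape=(1,), dtype="int32", name="item_id")
    user_feats = layers.Input(shape=(num_user_feats,), name="user_feats")
    item_feats = layers.Input(shape=(num_item_feats,), name="item_feats")

    u = layers.Flatten()(layers.Embedding(num_users, emb_dim)(user_id))
    i = layers.Flatten()(layers.Embedding(num_items, emb_dim)(item_id))

    x = layers.Concatenate()([u, i, user_feats, item_feats])
    x = layers.Dense(64, activation="relu")(x)
    x = layers.Dense(32, activation="relu")(x)
    score = layers.Dense(1, activation="sigmoid", name="score")(x)  # interaction likelihood

    model = Model([user_id, item_id, user_feats, item_feats], score)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

def recommend_top5(model, uid, user_feat_vec, item_ids, item_feat_matrix):
    """Score every candidate item for one user and return the five best item IDs."""
    n = len(item_ids)
    scores = model.predict(
        [np.full((n, 1), uid), np.asarray(item_ids).reshape(-1, 1),
         np.tile(user_feat_vec, (n, 1)), item_feat_matrix],
        verbose=0,
    ).ravel()
    return [item_ids[j] for j in np.argsort(scores)[::-1][:5]]

# e.g. model = build_ncf(num_users=1_000, num_items=200, num_user_feats=8, num_item_feats=6)
```

In this kind of setup the final sigmoid score is read as the likelihood that a user would take up an item, and candidate products are simply ranked by that score to form the top-five list.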
To strengthen the model's interpretability, a SHAP module was combined with the deep learning model to make its predictions more transparent, and OpenAI's large language model ChatGPT was used to generate plain-language explanations that help users understand why each item was recommended, further improving user satisfaction.
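The SHAP module and the ChatGPT step are likewise described only at a high level. The sketch below shows one plausible way to compute SHAP attributions for a single recommendation and turn the strongest ones into a prompt for a large language model; the feature names and the toy scoring function standing in for the trained model are assumptions, not the thesis's implementation.

```python
# Illustrative sketch only: explain one recommendation's score with SHAP and draft a
# plain-language explanation prompt from the top attributions. The feature names and
# the toy scoring function are assumptions standing in for the actual model and module.
import numpy as np
import shap

feature_names = ["industry", "company_size", "region", "item_category", "item_price"]

def score_fn(X):
    # Stand-in for the trained recommender's scoring function:
    # takes an (n, 5) feature matrix and returns an (n,) array of scores in [0, 1].
    w = np.array([0.4, 0.3, 0.1, 0.15, 0.05])
    return 1.0 / (1.0 + np.exp(-(X @ w)))

rng = np.random.default_rng(0)
background = rng.normal(size=(100, len(feature_names)))  # reference sample for SHAP
x_single = rng.normal(size=(1, len(feature_names)))      # one user-item pair to explain

explainer = shap.KernelExplainer(score_fn, background)
shap_values = explainer.shap_values(x_single)            # per-feature attributions

top = sorted(zip(feature_names, np.ravel(shap_values)),
             key=lambda t: abs(t[1]), reverse=True)[:3]
prompt = ("Explain in one or two friendly sentences why this product was recommended, "
          "given these feature contributions: "
          + "; ".join(f"{name}: {value:+.3f}" for name, value in top))
print(prompt)
# The prompt would then be sent to a large language model (e.g. ChatGPT)
# to generate the user-facing explanation text.
```

KernelExplainer is model-agnostic, so it can wrap the recommendation model's scoring function directly; an explainer built over the network itself would be another option.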
Data collected after the system went live show that this combined recommendation system clearly improves platform usage efficiency compared with relying on random recommendations alone. Overall, the results indicate that combining a deep-learning recommendation system, an explainable AI module, and language-model-generated explanations can effectively improve the efficiency and decision accuracy with which SMEs select cloud solutions on a government resource distribution platform, thereby supporting the digital transformation of Taiwan's SMEs.
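The comparison with random recommendations is reported only in summary form here. As a simple illustration of how such a comparison could be scored, the sketch below computes hit rate@5 (whether the item a user actually chose appears in their top-five list) for a model's lists versus a random baseline; the toy data stands in for the platform's real logs, and the thesis's own evaluation procedure is described in Chapters 3 and 4.

```python
# Illustrative sketch only: compare a recommender's top-5 lists against a random
# baseline using hit rate@5. The data below is synthetic and stands in for real logs.
import random

def hit_rate_at_5(top5_lists, chosen_items):
    hits = sum(1 for top5, chosen in zip(top5_lists, chosen_items) if chosen in top5)
    return hits / len(chosen_items)

random.seed(0)
items = list(range(50))                                        # candidate product IDs
chosen = [random.choice(items) for _ in range(1000)]           # item each user actually picked
random_top5 = [random.sample(items, 5) for _ in range(1000)]   # random baseline lists
model_top5 = [random.sample(items, 5) for _ in range(1000)]    # placeholder for model output,
                                                               # e.g. from recommend_top5() above

print("random baseline hit rate@5:", hit_rate_at_5(random_top5, chosen))
print("model hit rate@5:          ", hit_rate_at_5(model_top5, chosen))
```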
References [1] 資誠聯合會計師事務所 PwC Taiwan. 2021 臺灣中小企業轉型現況調查報告, 2021. Accessed: 2023-06-10.
[2] 經濟部中小企業處. Chapter 2.1. In 2022 年中小企業白皮書, pages 40–56. 2022.
[3] Guy Shani and Asela Gunawardana. Evaluating recommendation systems. Recommender Systems Handbook, pages 257–297, 2011.
[4] Alexandra Vultureanu-Albişi and Costin Bădică. Recommender systems: An explainable AI perspective. In 2021 International Conference on INnovations in Intelligent SysTems and Applications (INISTA), pages 1–6. IEEE, 2021.
[5] General Data Protection Regulation (GDPR). Online document, 2016. Accessed: 2023-07-01.
[6] Andrew I. Schein, Alexandrin Popescul, Lyle H. Ungar, and David M. Pennock. Methods and metrics for cold-start recommendations. In Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 253–260, 2002.
[7] Jonathan L. Herlocker, Joseph A. Konstan, and John Riedl. Explaining collaborative filtering recommendations. In Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work, pages 241–250, 2000.
[8] Zeynep Batmaz, Ali Yurekli, Alper Bilge, et al. A review on deep learning for recommender systems: Challenges and remedies. Artificial Intelligence Review, 52:1–37, 2019.
[9] Kunal Shah, Akshaykumar Salunke, Saurabh Dongare, et al. Recommender systems: An overview of different approaches to recommendations. In 2017 International Conference on Innovations in Information, Embedded and Communication Systems (ICIIECS), pages 1–4. IEEE, 2017.
[10] Michael J. Pazzani and Daniel Billsus. Content-based recommendation systems. The Adaptive Web: Methods and Strategies of Web Personalization, pages 325–341, 2007.
[11] Sarah Bouraga, Ivan Jureta, Stéphane Faulkner, et al. Knowledge-based recommendation systems: A survey. International Journal of Intelligent Information Technologies (IJIIT), 10(2):1–19, 2014.
[12] Xiangnan He, Lizi Liao, Hanwang Zhang, et al. Neural collaborative filtering. In Proceedings of the 26th International Conference on World Wide Web, pages 173–182, 2017.
[13] Hiroki Morise, Kyohei Atarashi, Satoshi Oyama, et al. Neural collaborative filtering with multicriteria evaluation data. Applied Soft Computing, 119:108548, 2022.
[14] Farhan Ullah, Bofeng Zhang, et al. Deep Edu: A deep neural collaborative filtering for educational services recommendation. IEEE Access, 8:110915–110928, 2020.
[15] Shuai Yu, Min Yang, Qiang Qu, et al. Contextual-boosted deep neural collaborative filtering model for interpretable recommendation. Expert Systems with Applications, 136:365–375, 2019.
[16] Hai Chen, Fulan Qian, Jie Chen, et al. Attribute-based neural collaborative filtering. Expert Systems with Applications, 185:115539, 2021.
[17] Andreas Holzinger, Anna Saranti, Christoph Molnar, et al. Explainable AI methods: A brief overview. In International Workshop on Extending Explainable AI Beyond Deep Models and Classifiers, pages 13–38. Springer, 2020.
[18] Roberto Confalonieri, Ludovik Coba, Benedikt Wagner, et al. A historical perspective of explainable artificial intelligence. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 11(1):e1391, 2021.
[19] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135–1144, 2016.
[20] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. Anchors: High-precision model-agnostic explanations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
[21] Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In International Conference on Machine Learning, pages 3319–3328. PMLR, 2017.
[22] Sebastian Bach, Alexander Binder, Grégoire Montavon, et al. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE, 10(7):e0130140, 2015.
[23] Sujata Khedkar, Priyanka Gandhi, Gayatri Shinde, et al. Deep learning and explainable AI in healthcare using EHR. Deep Learning Techniques for Biomedical and Health Informatics, pages 129–148, 2020.
[24] Shravan Sajja, Nupur Aggarwal, Sumanta Mukherjee, et al. Explainable AI based interventions for pre-season decision making in fashion retail. In Proceedings of the 3rd ACM India Joint International Conference on Data Science & Management of Data (8th ACM IKDD CODS & 26th COMAD), pages 281–289, 2021.
[25] Robert Zimmermann, Daniel Mora, Douglas Cirqueira, et al. Enhancing brick-and-mortar store shopping experience with an augmented reality shopping assistant application using personalized recommendations and explainable artificial intelligence. Journal of Research in Interactive Marketing, 17(2):273–298, 2023.
[26] Alex Gramegna and Paolo Giudici. Why to buy insurance? An explainable artificial intelligence approach. Risks, 8(4):137, 2020.
[27] Kamil Matuszelański and Katarzyna Kopczewska. Customer churn in retail e-commerce business: Spatial and machine learning approach. Journal of Theoretical and Applied Electronic Commerce Research, 17(1):165–198, 2022.
[28] Gülşah Yılmaz Benk, Bertan Badur, and Sona Mardikyan. A new 360° framework to predict customer lifetime value for multi-category e-commerce companies using a multi-output deep neural network and explainable artificial intelligence. Information, 13(8):373, 2022.
[29] Yongfeng Zhang, Xu Chen, et al. Explainable recommendation: A survey and new perspectives. Foundations and Trends® in Information Retrieval, 14(1):1–101, 2020.
[30] Donghee Shin. The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. International Journal of Human-Computer Studies, 146:102551, 2021.
[31] Mauro Dragoni, Ivan Donadello, and Claudio Eccher. Explainable AI meets persuasiveness: Translating reasoning results into behavioral change advice. Artificial Intelligence in Medicine, 105:101840, 2020.
[32] Arun Rai. Explainable AI: From black box to glass box. Journal of the Academy of Marketing Science, 48:137–141, 2020.
[33] Arun Kumar Sangaiah, Samira Rezaei, et al. Explainable AI in big data intelligence of community detection for digitalization e-healthcare services. Applied Soft Computing, 136:110119, 2023.
[34] Chun-Hua Tsai and Peter Brusilovsky. The effects of controllability and explainability in a social recommender system. User Modeling and User-Adapted Interaction, 31:591–627, 2021.
[35] Harald Steck, Linas Baltrunas, Ehtsham Elahi, et al. Deep learning for recommender systems: A Netflix case study. AI Magazine, 42(3):7–18, 2021.
[36] Scott M. Lundberg and Su-In Lee. A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30, 2017.
[37] Alexis Papadimitriou, Panagiotis Symeonidis, and Yannis Manolopoulos. A generalized taxonomy of explanations styles for traditional and social recommender systems. Data Mining and Knowledge Discovery, 24:555–583, 2012.
[38] Fulian Yin et al. An interpretable neural network TV program recommendation based on SHAP. International Journal of Machine Learning and Cybernetics, pages 1–14, 2023.
[39] Mingang Chen and Pan Liu. Performance evaluation of recommender systems. International Journal of Performability Engineering, 13(8):1246, 2017.
[40] Paolo Cremonesi, Yehuda Koren, and Roberto Turrin. Performance of recommender algorithms on top-N recommendation tasks. In Proceedings of the Fourth ACM Conference on Recommender Systems, pages 39–46, 2010.
[41] Ziqi Li. Extracting spatial effects from machine learning model using local interpretation method: An example of SHAP and XGBoost. Computers, Environment and Urban Systems, 96:101845, 2022.
[42] P. Aditya and Mayukha Pal. Local interpretable model agnostic SHAP explanations for machine learning models. arXiv preprint arXiv:2210.04533, 2022.
Description Master's thesis
National Chengchi University
Department of Computer Science, In-service Master's Program
110971009
Source http://thesis.lib.nccu.edu.tw/record/#G0110971009
Type thesis
Identifier G0110971009
URI http://nccur.lib.nccu.edu.tw/handle/140.119/147097
Table of Contents
Chapter 1 Introduction 1
1.1 Research Background 1
1.1.1 Difficulties of Digital Transformation for SMEs 1
1.2 Research Motivation and Objectives 2
1.2.1 Enterprises Have No Clear Idea of Which Products to Use for Digitalization 2
1.2.2 Introducing a Collaborative Filtering Recommendation System 2
1.2.3 Adding Explainability to the Recommendation System 2
1.3 Research Structure 4
Chapter 2 Literature Review 5
2.1 Recommendation Models: From Traditional Methods to Deep Learning Models 5
2.1.1 Traditional Recommendation Models 5
2.1.2 Deep Learning-Based Recommendation Systems 7
2.1.3 Deep Neural Collaborative Filtering 7
2.2 Explainable AI Techniques 9
2.2.1 Review of Related Techniques 9
2.2.2 Cases Applying These Techniques 11
2.3 Combining Recommendation Systems with Explainable AI 12
2.3.1 Advantages of Transparent Recommendation Systems 13
2.3.2 Empirical Studies of Explainable AI in Recommendation Systems 13
2.4 Conclusion of the Literature Review 15
Chapter 3 Research Methods 16
3.1 Research Steps 16
3.2 Data Preparation 17
3.2.1 User Data Processing 17
3.2.2 Product Item Data Processing 20
3.3 Recommendation Model Construction and Evaluation 21
3.3.1 Model Construction 21
3.3.2 Baseline Comparison Model Construction 22
3.3.3 Recommendation Model Output and Recommendation Logic 23
3.4 Explainable AI Module 24
3.4.1 The SHAP Explainable AI Module 24
3.4.2 Execution Process of the SHAP Module 26
3.4.3 Using a Large Language Model to Explain the SHAP Module Output 26
3.5 Online Performance Evaluation of the Research Model 28
Chapter 4 Research Results 31
4.1 Building the Neural Collaborative Filtering Recommendation Model 32
4.2 Model Validation and Output 33
4.2.1 Model Validation and Comparison 33
4.2.2 Model Output Results 34
4.3 Explainable AI Module 35
4.3.1 Global and Local Model Explanations 36
4.3.2 Combining the Explainable AI Module SHAP with a Large Language Model 39
4.4 Online Performance Evaluation Results and Analysis 41
Chapter 5 Conclusion and Research Limitations 43
5.1 Conclusion 43
5.2 Research Limitations and Future Outlook 44
References 45
Format application/pdf, 1,733,095 bytes