Title (Chinese): 深度學習為導向的可解釋性推薦系統-提升公部門的線上補助平台服務績效
Title (English): Explainable Deep Learning-Based Recommendation Systems: Enhancing the Services of Public Sector Subsidy Online Platform
Author: 莊鈞諺 (Zhuang, Jun-Yan)
Advisor: 胡毓忠 (Hu, Yuh-Jong)
Keywords: Explainable AI, Neural Collaborative Filtering, SHAP, Personalized Recommendation System
Date: 2023
Uploaded: 1-Sep-2023 15:39:58 (UTC+8)

Abstract (translated from the Chinese): This study addresses the low utilization of a government subsidy platform by using a personalized recommendation system to help Taiwanese small and medium-sized enterprises (SMEs) with digital transformation. To this end, a deep learning-based collaborative filtering recommendation system was designed and trained; it learns from multiple features of users and candidate items and presents each user with the five products they are most likely to be interested in. To strengthen interpretability, the SHAP module was combined with the deep learning model so that its predictions become more transparent, and OpenAI's large language model ChatGPT was used to generate easy-to-understand explanations that help users see why an item was recommended, further improving user satisfaction. Data gathered after the system went live show that this combined recommender clearly improved platform usage efficiency compared with purely random recommendations. Overall, the results indicate that pairing a deep learning recommendation system with an explainable AI module and language-model-generated explanations can effectively improve the efficiency and decision accuracy with which SMEs select cloud solutions on the government resource distribution platform, thereby supporting the digital transformation of Taiwan's SMEs.
Abstract (English): This research addresses inefficiencies in government subsidy platforms and supports SME digital transformation through a personalized recommendation system. We developed a collaborative filtering recommendation system based on deep learning. Integrating SHAP, an explainable AI module, makes the model's predictions transparent, and user-friendly explanations generated by a large language model (LLM) help users understand how the system arrives at its recommendations, boosting satisfaction. Empirical data from live operation show significant efficiency gains for the SHAP-augmented recommender compared with a random baseline: the combined system provides clear reasons for each recommendation, improving platform efficiency. In conclusion, the blend of a deep learning-based recommendation system, explainable AI, and language-model-generated explanations effectively enhances decision-making accuracy for cloud solutions on resource distribution platforms, aiding the digital transformation of Taiwan's SMEs.
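The abstracts describe a neural collaborative filtering model that learns from user and item features and returns each user's five most likely products. The record contains no code, so the following is only a minimal sketch of such a model in Keras; the feature inputs, embedding sizes, layer widths, and the `top5_for_user` helper are illustrative assumptions, not the author's actual implementation.

```python
# Minimal neural collaborative filtering (NCF) sketch.
# Sizes and side features are hypothetical, not the thesis configuration.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

n_users, n_items, emb_dim = 1000, 200, 32   # hypothetical sizes

user_id = layers.Input(shape=(1,), name="user_id")
item_id = layers.Input(shape=(1,), name="item_id")
side_feats = layers.Input(shape=(8,), name="side_features")  # e.g. firm profile + item attributes

u = layers.Flatten()(layers.Embedding(n_users, emb_dim)(user_id))
v = layers.Flatten()(layers.Embedding(n_items, emb_dim)(item_id))

# Concatenate embeddings with side features and score the user-item pair.
x = layers.Concatenate()([u, v, side_feats])
x = layers.Dense(64, activation="relu")(x)
x = layers.Dense(32, activation="relu")(x)
score = layers.Dense(1, activation="sigmoid", name="interaction_prob")(x)

model = Model(inputs=[user_id, item_id, side_feats], outputs=score)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
# model.fit(...) would be called here on historical user-item interactions.

def top5_for_user(uid: int) -> np.ndarray:
    """Score every candidate item for one user and return the five best item ids."""
    item_ids = np.arange(n_items)
    users = np.full((n_items, 1), uid)
    items = item_ids.reshape(-1, 1)
    feats = np.zeros((n_items, 8), dtype="float32")   # placeholder side features
    scores = model.predict([users, items, feats], verbose=0).ravel()
    return item_ids[np.argsort(scores)[::-1][:5]]
```

Ranking all candidate items by predicted interaction probability and keeping the five highest mirrors the top-five recommendation behaviour described in the abstract.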
Description: Master's thesis
Institution: 國立政治大學 (National Chengchi University)
Department: 資訊科學系碩士在職專班 (In-service Master's Program, Department of Computer Science)
Student ID: 110971009
Identifier: G0110971009
Source: http://thesis.lib.nccu.edu.tw/record/#G0110971009
URI: http://nccur.lib.nccu.edu.tw/handle/140.119/147097
Type: thesis
Table of Contents (translated from the Chinese):
Chapter 1 Introduction
  1.1 Research Background
    1.1.1 Difficulties of SME Digital Transformation
  1.2 Research Motivation and Objectives
    1.2.1 Enterprises Have No Clear Idea Which Products to Adopt for Digitalization
    1.2.2 Introducing a Collaborative Filtering Recommendation System
    1.2.3 Adding Explainability to the Recommendation System
  1.3 Research Structure
Chapter 2 Literature Review
  2.1 Recommendation Models: From Traditional Methods to Deep Learning
    2.1.1 Traditional Recommendation Models
    2.1.2 Deep Learning-Based Recommender Systems
    2.1.3 Deep Neural Collaborative Filtering
  2.2 Explainable AI Techniques
    2.2.1 Review of Related Techniques
    2.2.2 Application Cases
  2.3 Combining Recommender Systems with Explainable AI
    2.3.1 Advantages of Transparent Recommender Systems
    2.3.2 Empirical Studies of Explainable AI in Recommender Systems
  2.4 Conclusions of the Literature Review
Chapter 3 Research Method
  3.1 Research Steps
  3.2 Data Preparation
    3.2.1 User Data Processing
    3.2.2 Product Item Data Processing
  3.3 Recommendation Model Construction and Evaluation
    3.3.1 Model Construction
    3.3.2 Baseline Comparison Model
    3.3.3 Model Output and Recommendation Logic
  3.4 Explainable AI Module (see the sketch after this outline)
    3.4.1 The SHAP Explainable AI Module
    3.4.2 Execution of the SHAP Module
    3.4.3 Using a Large Language Model to Explain SHAP Outputs
  3.5 Online Performance Evaluation of the Model
Chapter 4 Results
  4.1 Building the Neural Collaborative Filtering Model
  4.2 Model Validation and Output
    4.2.1 Model Validation and Comparison
    4.2.2 Model Output Results
  4.3 Explainable AI Module
    4.3.1 Global and Local Model Explanations
    4.3.2 Combining SHAP with a Large Language Model
  4.4 Online Performance Evaluation Results and Analysis
Chapter 5 Conclusions and Research Limitations
  5.1 Conclusions
  5.2 Limitations and Future Work
References

File: application/pdf, 1,733,095 bytes
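Sections 3.4.1–3.4.3 of the outline above, together with the abstracts, describe the explanation pipeline: SHAP attributes each recommendation score to input features, and ChatGPT turns those attributions into plain-language reasons. The sketch below illustrates that pipeline under stated assumptions: the feature names, the stand-in `predict_fn`, the choice of `shap.KernelExplainer`, the prompt wording, and the OpenAI model name are all hypothetical rather than taken from the thesis.

```python
# Sketch: explain one recommendation with SHAP, then ask an LLM to phrase it
# for the end user. All names below are illustrative assumptions.
import numpy as np
import shap
from openai import OpenAI

feature_names = ["firm_size", "industry_code", "has_website", "monthly_budget",
                 "item_category", "item_price", "past_clicks", "region"]

def predict_fn(X: np.ndarray) -> np.ndarray:
    """Stand-in for the trained recommender: one interaction score per row.
    In the real pipeline this would wrap the deep learning model's predict()."""
    weights = np.linspace(0.1, 0.8, X.shape[1])
    return 1.0 / (1.0 + np.exp(-(X @ weights - 2.0)))

rng = np.random.RandomState(0)
background = rng.rand(50, len(feature_names))          # reference sample for SHAP
explainer = shap.KernelExplainer(predict_fn, background)

candidate = rng.rand(1, len(feature_names))            # one user-item feature vector
shap_values = explainer.shap_values(candidate)[0]      # contribution of each feature

# Keep the three largest contributions and ask the LLM to phrase them for the user.
top = sorted(zip(feature_names, shap_values),
             key=lambda t: abs(t[1]), reverse=True)[:3]
prompt = (
    "In one or two plain sentences, explain to a small business owner why this "
    "cloud product was recommended, given these SHAP feature contributions: "
    + ", ".join(f"{name}: {value:+.3f}" for name, value in top)
)

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
reply = client.chat.completions.create(
    model="gpt-3.5-turbo",                 # hypothetical model choice
    messages=[{"role": "user", "content": prompt}],
)
print(reply.choices[0].message.content)
```

Restricting the prompt to the few largest attributions keeps the generated explanation short and tied to the model's own reasoning; a real deployment would substitute the trained recommender for `predict_fn`.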