
Title MLOps的視覺化負責任人工智慧 – Price Prediction as a Service的應用
Implementing Responsible AI in Price Prediction as a Service through Visualization Arrangements in MLOps
Author 江應翎
Chiang, Ying-Ling
Contributors 蔡瑞煌; 洪智鐸
Tsaih, Rua-Huan; Hong, Chih-Duo
江應翎
Chiang, Ying-Ling
Keywords MLOps
負責任人工智慧
視覺化
MLOps
Responsible artificial intelligence (RAI)
visualization
Date 2025
Date uploaded 1-Sep-2025 15:04:54 (UTC+8)
Abstract This thesis explores how to implement Responsible Artificial Intelligence (RAI) principles in MLOps through visualization to enhance fairness, explainability, accountability, and reliability of financial AI applications, further assisting financial professionals without AI-related knowledge to comply with responsible AI principles when using MLOps. This study develops four visualization design guidelines tailored to Taiwan's financial industry context—Information Traceability, Logical Explainability, Decision Participation, and Risk Anticipation. Based on these four visualization guidelines, five RAI visualization tools are implemented across various MLOps modules: Traceability Dashboard, Data Quality Indicator, Feature Importance and Model Explanation, and Model Comparison. These tools are implemented and applied to stock price prediction in the Price Prediction as a Service (PPaaS) system. The research employs a comparative evaluation between standard user interfaces and those enhanced with RAI visualization tools. Financial professionals from banking, securities, and insurance industries participated in the study, providing Likert-scale ratings and qualitative feedback on the degree to which each tool supports specific RAI principles. Results show significant improvements in quantitative scores. Qualitative feedback confirms that RAI-enhanced interfaces address critical gaps in the standard platform regarding information traceability, logical explainability, decision participation, and risk anticipation. This study validates that targeted visualization interventions can successfully operationalize abstract RAI principles into practical tools, enhancing non-technical users' understanding, trust, and decision-making quality in AI-driven financial applications, and provides systematic guidelines for implementing responsible AI through visualization design.
References
[1] Alicioglu, G., & Sun, B. (2022). A survey of visual analytics for explainable artificial intelligence methods. Computers & Graphics, 102, 502-520.
[2] Besinger, P., Vejnoska, D., & Ansari, F. (2024). Responsible AI (RAI) in manufacturing: A qualitative framework. Procedia Computer Science, 232, 813-822.
[3] Davis, F. D. (1989). Technology acceptance model: TAM. Al-Suqri, MN, Al-Aufi, AS: Information Seeking Behavior and Technology Adoption, 205(219), 5.
[4] Dignum, V. (2019). Responsible artificial intelligence: How to develop and use AI in a responsible way (Vol. 2156). Cham: Springer.
[5] Financial Supervisory Commission R.O.C. (Taiwan). (2024). Guidelines on the use of artificial intelligence in the financial industry.
[6] Følstad, A. (2007). Work-domain experts as evaluators: Usability inspection of domain-specific work-support systems. International Journal of Human-Computer Interaction, 22(3), 217-245.
[7] Jain, A., Patel, H., Nagalapatti, L., Gupta, N., Mehta, S., Guttula, S., ... & Munigala, V. (2020, August). Overview and importance of data quality for machine learning tasks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 3561-3562).
[8] Matsui, B. M., & Goya, D. H. (2022, May). MLOps: A guide to its adoption in the context of responsible AI. In Proceedings of the 1st Workshop on Software Engineering for Responsible AI (pp. 45-49).
[9] Molnar, C., Freiesleben, T., König, G., Herbinger, J., Reisinger, T., Casalicchio, G., ... & Bischl, B. (2023, July). Relating the partial dependence plot and permutation feature importance to the data generating process. In World Conference on Explainable Artificial Intelligence (pp. 456-479). Cham: Springer Nature Switzerland.
[10] Mujkanovic, F., Doskoč, V., Schirneck, M., Schäfer, P., & Friedrich, T. (2020). timeXplain -- A framework for explaining the predictions of time series classifiers. arXiv preprint arXiv:2007.07606.
[11] Pathak, S. (2022). Explainable AI for ML Ops. In World of Business with Data and Analytics (pp. 187-201). Singapore: Springer Nature Singapore.
[12] Salama, K., Kazmierczak, J., & Schut, D. (2021, May). Practitioners guide to MLOps: A framework for continuous delivery and automation of machine learning. Google Cloud.
[13] Schlegel, U., & Keim, D. A. (2021, October). Time series model attribution visualizations as explanations. In 2021 IEEE Workshop on TRust and EXpertise in Visual Analytics (TREX) (pp. 27-31). IEEE.
[14] Testi, M., Ballabio, M., Frontoni, E., Iannello, G., Moccia, S., Soda, P., & Vessio, G. (2022). MLOps: A taxonomy and a methodology. IEEE Access, 10, 61725-61747.
[15] Yuan, J., Chen, C., Yang, W., Liu, M., Xia, J., & Liu, S. (2021). A survey of visual analytics techniques for machine learning. Computational Visual Media, 7(1), 3-36.
Description Master's thesis
National Chengchi University
Department of Management Information Systems
112356030
Source http://thesis.lib.nccu.edu.tw/record/#G0112356030
Type thesis
dc.contributor.advisor 蔡瑞煌; 洪智鐸zh_TW
dc.contributor.advisor Tsaih, Rua-Huan; Hong, Chih-Duoen_US
dc.contributor.author (Authors) 江應翎zh_TW
dc.contributor.author (Authors) Chiang, Ying-Lingen_US
dc.creator (作者) 江應翎zh_TW
dc.creator (作者) Chiang, Ying-Lingen_US
dc.date (日期) 2025en_US
dc.date.accessioned 1-Sep-2025 15:04:54 (UTC+8)
dc.date.available 1-Sep-2025 15:04:54 (UTC+8)
dc.date.issued (上傳時間) 1-Sep-2025 15:04:54 (UTC+8)
dc.identifier (Other Identifiers) G0112356030en_US
dc.identifier.uri (URI) https://nccur.lib.nccu.edu.tw/handle/140.119/159094
dc.description (描述) 碩士zh_TW
dc.description (描述) 國立政治大學zh_TW
dc.description (描述) 資訊管理學系zh_TW
dc.description (描述) 112356030zh_TW
dc.description.abstract (摘要) 本論文探討如何透過視覺化在MLOps中實施負責任人工智慧(RAI)原則,以提升金融AI部署的公平性、解釋性、問責性和可靠性。本研究針對台灣金融業現況制定四項視覺化指導原則—資訊追溯性、邏輯解釋性、決策參與性和風險預警性。以四項視覺化指導原則為基礎,實作示範MLOps各階段提出五項RAI視覺化工具:用於稽核軌跡的追溯儀表板、數據準備階段偏差檢測的資料品質指標、模型驗證階段提升可解釋性的特徵重要性和模型解釋工具,以及部署決策的模型比較介面。這些工具在股價預測服務(PPaaS)系統中實作並應用於股價預測。研究採用標準與增加了RAI視覺化工具的使用者介面的比較評估。來自銀行、證券和保險業的金融專業人士參與研究,針對各工具支援特定RAI原則的程度提供李克特量表(Likert scale)評分和質性回饋。結果顯示量化評分有顯著改善。質性回饋證實RAI增強介面解決了標準平台在資訊追溯性、邏輯解釋性、決策參與性和風險預警性方面的關鍵缺口。本研究驗證了針對性視覺化介入能成功將抽象RAI原則操作化為實用工具,提升非AI專業使用者對AI驅動金融應用的理解、信任和決策品質,為透過視覺化設計實施負責任AI提供系統性指導原則。zh_TW
dc.description.abstract (摘要) This thesis explores how to implement Responsible Artificial Intelligence (RAI) principles in MLOps through visualization to enhance fairness, explainability, accountability, and reliability of financial AI applications, further assisting financial professionals without AI-related knowledge to comply with responsible AI principles when using MLOps. This study develops four visualization design guidelines tailored to Taiwan's financial industry context—Information Traceability, Logical Explainability, Decision Participation, and Risk Anticipation. Based on these four visualization guidelines, five RAI visualization tools are implemented across various MLOps modules: Traceability Dashboard, Data Quality Indicator, Feature Importance and Model Explanation, and Model Comparison. These tools are implemented and applied to stock price prediction in the Price Prediction as a Service (PPaaS) system. The research employs a comparative evaluation between standard user interfaces and those enhanced with RAI visualization tools. Financial professionals from banking, securities, and insurance industries participated in the study, providing Likert scale ratings and qualitative feedback on the degree to which each tool supports specific RAI principles. Results show significant improvements in quantitative scores. Qualitative feedback confirms that RAI-enhanced interfaces address critical gaps in the standard platform regarding information traceability, logical explainability, decision participation, and risk anticipation. This study validates that targeted visualization interventions can successfully operationalize abstract RAI principles into practical tools, enhancing non-technical users' understanding, trust, and decision-making quality in AI-driven financial applications, providing systematic guidelines for implementing responsible AI through visualization design.en_US
dc.description.tableofcontents 摘要 ii
Abstract iii
Chapter 1. Introduction 1
Chapter 2. Literature Review 4
2.1 Responsible Artificial Intelligence 4
2.2 Machine Learning Operations 4
2.2.1 Introduction of Machine Learning Operations 4
2.2.2 MLOps Levels 5
2.2.3 MLOps Steps 5
2.3 Visualization 7
2.3.1 Visualization in MLOps 7
2.3.2 Visualization of SHAP 9
2.4 Price Prediction as a Service 9
Chapter 3. Responsible AI in Financial Industry and The Proposed Visualization Guidelines 11
3.1 Responsible AI Principles 11
3.2 Responsible AI in Financial Industry 13
3.3 The Proposed Visualization Guidelines 17
3.3.1 Information Traceability 18
3.3.2 Logical Explainability 19
3.3.3 Decision Participation 20
3.3.4 Risk Anticipation 21
3.3.5 Guideline Integration 22
Chapter 4. Experiment Design 24
4.1 Research Objectives 24
4.2 Responsible AI Visualization Tools 24
4.2.1 Traceability Board in All Stages of MLOps 26
4.2.2 Data Quality Indicator in Data Preparation 27
4.2.3 Feature Importance and Model Explanation in Model Validation 29
4.2.4 Model Comparison Before Model Deployment 30
4.3 Questionnaire Design 32
Chapter 5. Experiment Results 34
5.1 Demographics 34
5.2 Analysis of Responsible AI Visualization Tools 36
5.2.1 Traceability Board and Accountability 36
5.2.2 Data Quality Indicator, Fairness and Reliability 39
5.2.3 Feature Importance, Model Explanation and Explainability 42
5.2.4 Model Comparison, Accountability and Reliability 46
5.3 Qualitative Results of Manager and Non-Manager Views 50
5.4 Quantitative Results Overview 51
Chapter 6. Conclusion 54
6.1 Conclusion 54
6.2 Limitations and Future Work 55
References 58
Appendix A: Questionnaire 60
Appendix B: Questionnaire Response Bilingual Comparison Table (Chinese-English) 71zh_TW
dc.format.extent 5677191 bytes
dc.format.mimetype application/pdf
dc.source.uri (資料來源) http://thesis.lib.nccu.edu.tw/record/#G0112356030en_US
dc.subject (關鍵詞) MLOpszh_TW
dc.subject (關鍵詞) 負責任人工智慧zh_TW
dc.subject (關鍵詞) 視覺化zh_TW
dc.subject (關鍵詞) MLOpsen_US
dc.subject (關鍵詞) Responsible artificial intelligence (RAI)en_US
dc.subject (關鍵詞) visualizationen_US
dc.title (題名) MLOps的視覺化負責任人工智慧 – Price Prediction as a Service的應用zh_TW
dc.title (題名) Implementing Responsible AI in Price Prediction as a Service through Visualization Arrangements in MLOpsen_US
dc.type (資料類型) thesisen_US
dc.relation.reference (參考文獻) [1] Alicioglu, G., & Sun, B. (2022). A survey of visual analytics for explainable artificial intelligence methods. Computers & Graphics, 102, 502-520.
[2] Besinger, P., Vejnoska, D., & Ansari, F. (2024). Responsible AI (RAI) in manufacturing: A qualitative framework. Procedia Computer Science, 232, 813-822.
[3] Davis, F. D. (1989). Technology acceptance model: TAM. Al-Suqri, MN, Al-Aufi, AS: Information Seeking Behavior and Technology Adoption, 205(219), 5.
[4] Dignum, V. (2019). Responsible artificial intelligence: How to develop and use AI in a responsible way (Vol. 2156). Cham: Springer.
[5] Financial Supervisory Commission R.O.C. (Taiwan). (2024). Guidelines on the use of artificial intelligence in the financial industry.
[6] Følstad, A. (2007). Work-domain experts as evaluators: Usability inspection of domain-specific work-support systems. International Journal of Human-Computer Interaction, 22(3), 217-245.
[7] Jain, A., Patel, H., Nagalapatti, L., Gupta, N., Mehta, S., Guttula, S., ... & Munigala, V. (2020, August). Overview and importance of data quality for machine learning tasks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 3561-3562).
[8] Matsui, B. M., & Goya, D. H. (2022, May). MLOps: A guide to its adoption in the context of responsible AI. In Proceedings of the 1st Workshop on Software Engineering for Responsible AI (pp. 45-49).
[9] Molnar, C., Freiesleben, T., König, G., Herbinger, J., Reisinger, T., Casalicchio, G., ... & Bischl, B. (2023, July). Relating the partial dependence plot and permutation feature importance to the data generating process. In World Conference on Explainable Artificial Intelligence (pp. 456-479). Cham: Springer Nature Switzerland.
[10] Mujkanovic, F., Doskoč, V., Schirneck, M., Schäfer, P., & Friedrich, T. (2020). timeXplain -- A framework for explaining the predictions of time series classifiers. arXiv preprint arXiv:2007.07606.
[11] Pathak, S. (2022). Explainable AI for ML Ops. In World of Business with Data and Analytics (pp. 187-201). Singapore: Springer Nature Singapore.
[12] Salama, K., Kazmierczak, J., & Schut, D. (2021, May). Practitioners guide to MLOps: A framework for continuous delivery and automation of machine learning. Google Cloud.
[13] Schlegel, U., & Keim, D. A. (2021, October). Time series model attribution visualizations as explanations. In 2021 IEEE Workshop on TRust and EXpertise in Visual Analytics (TREX) (pp. 27-31). IEEE.
[14] Testi, M., Ballabio, M., Frontoni, E., Iannello, G., Moccia, S., Soda, P., & Vessio, G. (2022). MLOps: A taxonomy and a methodology. IEEE Access, 10, 61725-61747.
[15] Yuan, J., Chen, C., Yang, W., Liu, M., Xia, J., & Liu, S. (2021). A survey of visual analytics techniques for machine learning. Computational Visual Media, 7(1), 3-36.zh_TW