| dc.contributor.advisor | 臧正運 | zh_TW |
| dc.contributor.advisor | Tsang, Cheng-Yun | en_US |
| dc.contributor.author (Authors) | 沈芸嫺 | zh_TW |
| dc.contributor.author (Authors) | Shen, Yun-Hsien | en_US |
| dc.creator (作者) | 沈芸嫺 | zh_TW |
| dc.creator (作者) | Shen, Yun-Hsien | en_US |
| dc.date (日期) | 2026 | en_US |
| dc.date.accessioned | 2-Mar-2026 11:37:53 (UTC+8) | - |
| dc.date.available | 2-Mar-2026 11:37:53 (UTC+8) | - |
| dc.date.issued (上傳時間) | 2-Mar-2026 11:37:53 (UTC+8) | - |
| dc.identifier (Other Identifiers) | G0111ZB1038 | en_US |
| dc.identifier.uri (URI) | https://nccur.lib.nccu.edu.tw/handle/140.119/161793 | - |
| dc.description (描述) | 碩士 | zh_TW |
| dc.description (描述) | 國立政治大學 | zh_TW |
| dc.description (描述) | 國際金融碩士學位學程 | zh_TW |
| dc.description (描述) | 111ZB1038 | zh_TW |
| dc.description.abstract (摘要) | 近年來,金融機構廣泛導入人工智慧(AI)於授信審查、風險管理、防制洗錢與詐欺偵測、客服與內部作業等情境,雖有助於提升效率與精準度,卻也伴隨「黑箱決策」可能弱化金融消費者權益保障與監理問責之疑慮。國際間逐漸將「透明性與可解釋性」視為可信任或負責任AI的核心要素之一,並透過風險導向架構,要求金融機構在高風險應用情境下提供更高程度的說明與文件化。
在此脈絡下,我國自《人工智慧基本法》提出「透明與可解釋」等治理原則,金管會陸續發布《金融業運用人工智慧(AI)之核心原則與相關推動政策》及《金融業運用人工智慧(AI)指引》,銀行、證券與保險三大公會亦訂定自律規範,初步建構出「上位原則—主管機關指引—產業自律」之AI治理架構。惟從實務觀察與現行規範內容來看,金融機構在落實AI可解釋性時,仍面臨如模型愈複雜愈難說明、跨國集團與外部模型資訊不對稱,以及對不同利害關係人應提供何種解釋與揭露等問題,顯示現行制度在風險分級、工具化支援與監理互動機制上仍有不足。
本報告雖以「可解釋性」為主要切入點,但亦因應國際框架之設計,在分析過程中適度討論透明性、公平性與問責性等相關治理原則,以完整呈現可解釋性在金融監理情境中的角色。
本實務報告採文獻分析與比較制度分析方法,首先盤點我國現行與AI可解釋性相關之法規、政策與自律規範,整理出在以原則為主的架構下,金融機構實務上落實可解釋性所面臨的關鍵議題與制度缺口;其次,以美國國家標準技術研究院(NIST)之AI Risk Management Framework(AI RMF)及其Playbook、Generative AI Profile,與新加坡金融管理局(MAS)之FEAT原則、Veritas Toolkit與AI Model Risk Management(AI MRM)為主要比較對象,分析其如何透過風險分級、分層說明、第三方與集團模型治理,以及工具化與自評機制等設計,將可解釋性從抽象原則具體化為可操作的治理流程。
綜合臺灣現況與國際經驗,本報告提出四項具漸進性與可行性的建議方向:一是建立簡化的「風險分級×可解釋性層級」概念架構,作為主管機關與金融機構討論「解釋是否足夠」時的共同語言;二是強化外部與集團模型之資訊取得與責任劃分,透過最低資訊需求清單與契約條款示例,使「最終責任在金融機構」得以具體落實;三是建議與公會及先行機構合作,建構非強制性的AI可解釋性「作業工具箱」與自評機制,提供情境分類、風險評估表、檢核表與文件範本,降低金融機構個別摸索成本;四是運用主題式檢查、申報與資料蒐集架構及監理科技(SupTech),逐步建構AI風險與可解釋性之監測指標與回饋機制。
本報告的實務意義在於:以我國既有《AI指引》與三大公會自律規範為基礎,透過建立風險分級、提供工具化支援與提升監理互動三個面向,提供一套可供主管機關與金融機構漸進採用的治理路徑。預期可在不顛覆現行制度架構的前提下,協助縮小「原則宣示」與「實務落地」之間的落差,並提升金融機構在使用AI時兼顧創新應用與金融消費者保護的能力。 | zh_TW |
| dc.description.abstract (摘要) | In recent years, financial institutions have widely adopted artificial intelligence (AI) in use cases such as credit underwriting, risk management, anti-money laundering (AML) and fraud detection, customer service, and internal operations. While these applications can enhance efficiency and accuracy, they also raise concerns that “black-box” decision-making may weaken protections for financial consumers and undermine supervisory accountability. Internationally, transparency and explainability have increasingly been treated as core elements of trustworthy or responsible AI, and risk-based approaches are being developed to require higher levels of explanation and documentation for high-risk applications.
Against this backdrop, Taiwan has set out governance principles including “transparency and explainability” under the AI Basic Law. The Financial Supervisory Commission (FSC) has subsequently issued the Core Principles and Related Promotion Policies on the Use of AI in the Financial Sector, as well as the Guidelines on the Use of AI in the Financial Sector. In addition, the banking, securities, and insurance associations have established self-regulatory standards, thereby forming an initial AI governance framework consisting of “overarching principles—supervisory guidance—industry self-regulation.” However, based on practical observations and the content of existing rules, financial institutions still face significant challenges in implementing AI explainability, including: the increasing difficulty of providing meaningful explanations as model complexity grows; information asymmetry in cross-border group models and externally sourced models; and uncertainty over what types and levels of explanations and disclosures should be provided to different stakeholders. These challenges indicate that the current framework remains insufficient in terms of risk tiering, tool-based implementation support, and mechanisms for supervisory engagement.
Although this report takes “explainability” as its primary focus, it also discusses related governance principles—such as transparency, fairness, and accountability—where appropriate, in line with the design of major international frameworks, in order to present a more complete picture of the role of explainability in the context of financial supervision.
This practice-oriented report employs literature review and comparative institutional analysis. It first maps Taiwan’s current laws, policies, and self-regulatory standards related to AI explainability, and identifies key practical issues and institutional gaps faced by financial institutions under a largely principles-based regime. It then examines, as major comparators, the U.S. National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) together with its Playbook and Generative AI Profile, as well as the Monetary Authority of Singapore (MAS) FEAT principles, the Veritas Toolkit, and the Artificial Intelligence Model Risk Management (AI MRM) publication. The analysis focuses on how these frameworks operationalize explainability through design features such as risk tiering, tiered explanation approaches, governance of third-party and group models, and tool-based implementation and self-assessment mechanisms, thereby translating explainability from an abstract principle into actionable governance processes.
Drawing on Taiwan’s current context and international experience, this report proposes four incremental and feasible recommendations. First, it recommends establishing a simplified “risk tiering × explainability levels” conceptual framework to provide a shared language for supervisors and financial institutions when assessing whether explanations are “sufficient.” Second, it proposes strengthening information access and responsibility allocation for external and group models, including illustrative minimum information requirements and contractual clauses, so that the principle of “ultimate responsibility rests with the financial institution” can be operationalized. Third, it recommends that the FSC collaborate with industry associations and early adopters to develop a non-mandatory “operational toolkit” and self-assessment mechanism for AI explainability, including use-case classification, risk assessment templates, checklists, and documentation samples, to reduce trial-and-error costs for individual institutions. Fourth, it proposes leveraging thematic reviews, reporting and data-collection structures, and supervisory technology (SupTech) to progressively establish monitoring indicators and feedback mechanisms for AI risks and explainability.
The practical contribution of this report lies in proposing an incremental governance pathway—grounded in Taiwan’s existing AI Guidelines and industry self-regulatory standards—through three reinforcing elements: risk tiering, tool-based support, and enhanced supervisory engagement. Without overturning the current institutional framework, the proposed approach is expected to help narrow the gap between “principles on paper” and “implementation in practice,” and to strengthen financial institutions’ capacity to balance innovative AI use with financial consumer protection. | en_US |
| dc.description.tableofcontents | 第一章 緒論 1
1.1 研究背景與動機 1
1.2 研究問題與目的 2
1.3 研究範圍、研究方法與名詞界定 4
1.4 研究限制與未來研究建議 6
第二章 臺灣金融機構使用人工智慧可解釋性之現行規範與實務問題 7
2.1 臺灣金融監理機關與業界AI規範架構概述 7
2.1.1 人工智慧基本法與國家AI治理原則 7
2.1.2 金融監督管理委員會相關政策與指引 9
2.1.3 銀行公會及金融業自律規範 13
2.2 現行規範下金融機構落實可解釋性之關鍵實務議題 16
2.3 實務問題之整理 19
第三章 國外金融機構使用人工智慧可解釋性規範與框架之比較分析 23
3.1 美國:AI風險管理架構與可解釋性之相關規範 24
3.1.1 NIST AI風險管理框架之核心架構與可信任AI特徵 26
3.1.2 AI RMF Playbook:由原則架構走向具體行動與文件化要求 29
3.1.3 NIST Generative AI Profile:將AI RMF應用於生成式AI之風險與可解釋性指標 32
3.2 新加坡:金融業AI可解釋性原則與工具 36
3.2.1 FEAT原則及Explainability相關規範 38
3.2.2 Veritas Toolkit:由FEAT原則走向「可解釋性 × 風險分級」的工具化實作 41
3.2.3 AI Model Risk Management:將可解釋性納入模型風險管理主流 49
3.3 小結與比較:美國與新加坡風險導向可解釋性治理之共通元素與差異 55
第四章 臺灣金融機構使用人工智慧可解釋性監理架構之建議 59
4.1 建議一:建立「風險分級 × 可解釋性層級」概念架構與示例 59
4.2 建議二:強化外部與集團模型之資訊取得與責任劃分 62
4.3 建議三:建立可解釋性作業工具箱與自評機制 64
4.4 建議四:運用監理互動與SupTech建構AI風險與可解釋性監測架構 66
4.5 本章小結:政策發展路徑與本報告貢獻 68
第五章 結論與建議 69
5.1 研究發現總結 69
5.1.1 臺灣AI可解釋性監理現況與實務缺口 69
5.1.2 美國與新加坡框架對我國之啟示 69
5.2 政策與實務建議之總結 70
參考文獻 73
附錄 79 | zh_TW |
| dc.format.extent | 1808117 bytes | - |
| dc.format.mimetype | application/pdf | - |
| dc.source.uri (資料來源) | http://thesis.lib.nccu.edu.tw/record/#G0111ZB1038 | en_US |
| dc.subject (關鍵詞) | 金融監理 | zh_TW |
| dc.subject (關鍵詞) | 人工智慧 | zh_TW |
| dc.subject (關鍵詞) | 可解釋性 | zh_TW |
| dc.subject (關鍵詞) | 風險管理 | zh_TW |
| dc.subject (關鍵詞) | 監理科技 | zh_TW |
| dc.subject (關鍵詞) | Financial supervision | en_US |
| dc.subject (關鍵詞) | Artificial intelligence | en_US |
| dc.subject (關鍵詞) | Explainability | en_US |
| dc.subject (關鍵詞) | Risk management | en_US |
| dc.subject (關鍵詞) | SupTech | en_US |
| dc.title (題名) | 金融機構運用人工智慧可解釋性之監理趨勢與挑戰 | zh_TW |
| dc.title (題名) | REGULATORY TRENDS AND CHALLENGES IN THE EXPLAINABLE USE OF ARTIFICIAL INTELLIGENCE BY FINANCIAL INSTITUTIONS | en_US |
| dc.type (資料類型) | thesis | en_US |
| dc.relation.reference (參考文獻) | 一、中文部分
(一)政府資料
1、國家科學及技術委員會。(2025年12月24日)。人工智慧基本法三讀通過[新聞稿]。最後瀏覽日:2026年2月2日,取自https://www.nstc.gov.tw/folksonomy/detail/ed981806-1852-4b63-8dfd-9eea04157971?l=ch
2、金融監督管理委員會。(2025年5月20日)。金融機構運用人工智慧(AI)情形調查結果[研究與統計資料]。最後瀏覽日:2026年2月2日,取自https://www.fsc.gov.tw/ch/home.jsp?id=96&parentpath=0,2&mcustomize=news_view.jsp&dataserno=202505200001&dtable=News
3、金融監督管理委員會。(2020年8月27日)。金管會發布「金融科技發展路徑圖」。最後瀏覽日:2026年2月2日,取自https://www.fsc.gov.tw/ch/home.jsp?id=96&parentpath=0,2&mcustomize=news_view.jsp&dataserno=202008270008&dtable=News
4、金融監督管理委員會。(2023年8月15日)。金管會發布「金融科技發展路徑圖(2.0)」,期能實現更包容、公平[新聞稿]。最後瀏覽日:2026年2月2日,取自https://www.fsc.gov.tw/ch/home.jsp?id=96&mcustomize=news_view.jsp&dataserno=202308150002&dtable=News
5、金融監督管理委員會。(2023年10月17日)。金管會公布金融業運用人工智慧(AI)之核心原則與相關推動政策[新聞稿]。最後瀏覽日:2026年2月2日,取自https://www.fsc.gov.tw/ch/home.jsp?id=96&mcustomize=news_view.jsp&dataserno=202310170002&dtable=News
6、金融監督管理委員會。(2024年6月20日)。金管會發布「金融業運用人工智慧(AI)指引」,期引導金融業負責任使用AI[新聞稿]。最後瀏覽日:2026年2月2日,取自https://www.fsc.gov.tw/ch/home.jsp?id=96&mcustomize=news_view.jsp&dataserno=202406200001&dtable=News
7、金融監督管理委員會。(2024年12月31日)。金管會於金融市場發展及創新處下新設「監理科技及研究應用組」,以強化監理科技及研究應用[新聞稿]。最後瀏覽日:2026年2月2日,取自https://www.fsc.gov.tw/ch/home.jsp?id=96&mcustomize=news_view.jsp&dataserno=202412310001&dtable=News
8、金融監督管理委員會。(2025年3月27日)。金管會啟動「隱璞尋光計畫」,推動金融科技研發與實證[新聞稿]。最後瀏覽日:2026年2月2日,取自https://www.fsc.gov.tw/ch/home.jsp?id=96&mcustomize=news_view.jsp&dataserno=202503270003&dtable=News
二、英文部分
(一)政府與監理機關文件
1、Board of Governors of the Federal Reserve System. (2011, April 4). Supervisory guidance on model risk management (SR 11-7). https://www.federalreserve.gov/supervisionreg/srletters/sr1107.htm
2、Consumer Financial Protection Bureau. (2023, September 19). CFPB issues guidance on credit denials by lenders using artificial intelligence. https://www.consumerfinance.gov/about-us/newsroom/cfpb-issues-guidance-on-credit-denials-by-lenders-using-artificial-intelligence/
3、National Institute of Standards and Technology. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0) (NIST AI 100-1). U.S. Department of Commerce. https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf
4、National Institute of Standards and Technology. (n.d.). AI RMF playbook. NIST AI Resource Center. Retrieved February 2, 2026, from https://airc.nist.gov/airmf-resources/playbook/
5、National Institute of Standards and Technology. (2024). Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (NIST AI 600-1). U.S. Department of Commerce. https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf
6、Office of Management and Budget. (2024, March 28). Advancing governance, innovation, and risk management for agency use of artificial intelligence (Memorandum M-24-10). Executive Office of the President. https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf
7、The White House. (2023, October 30). Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
8、Monetary Authority of Singapore. (2018). Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore’s Financial Sector. Singapore: Monetary Authority of Singapore.
https://www.mas.gov.sg/-/media/mas/news-and-publications/monographs-and-information-papers/feat-principles-updated-7-feb-19.pdf
9、Monetary Authority of Singapore, & Veritas Consortium. (2021). Veritas Document 1: FEAT Fairness Principles Assessment Methodology. Singapore: Monetary Authority of Singapore.
https://www.mas.gov.sg/-/media/mas/news/media-releases/2021/veritas-document-1-feat-fairness-principles-assessment-methodology.pdf
10、Monetary Authority of Singapore, & Veritas Consortium. (2021). Veritas Document 2: FEAT Fairness Principles Assessment Case Studies. Singapore: Monetary Authority of Singapore.
https://www.mas.gov.sg/-/media/MAS/News/Media-Releases/2021/Veritas-Document-2-FEAT-Fairness-Principles-Assessment-Case-Studies.pdf
11、Monetary Authority of Singapore, & Veritas Consortium. (2019). Veritas Document 3: Data and Metrics for Veritas Fairness Assessment Methodology. Singapore: Monetary Authority of Singapore.
https://www.mas.gov.sg/-/media/mas-media-library/news/media-releases/2022/veritas-document-3---feat-principles-assessment-methodology.pdf
12、Monetary Authority of Singapore. (2022). Veritas Document 3A: FEAT fairness principles assessment methodology. https://www.mas.gov.sg/-/media/MAS-Media-Library/news/media-releases/2022/Veritas-Document-3A---FEAT-Fairness-Principles-Assessment-Methodology.pdf
13、Monetary Authority of Singapore. (2022). Veritas Document 3B: FEAT ethics and accountability principles assessment methodology. https://www.mas.gov.sg/-/media/mas-media-library/news/media-releases/2022/veritas-document-3b---feat-ethics-and-accountability-principles-assessment-methodology.pdf
14、Monetary Authority of Singapore. (2022). Veritas Document 3C: FEAT transparency principles assessment methodology. https://www.mas.gov.sg/-/media/mas-media-library/news/media-releases/2022/veritas-document-3c---feat-transparency-principles-assessment-methodology.pdf
15、Monetary Authority of Singapore, & Veritas Consortium. (2020). Veritas Document 4: FEAT Fairness Assessment Methodology. Singapore: Monetary Authority of Singapore.
https://www.mas.gov.sg/-/media/mas-media-library/news/media-releases/2022/veritas-document-4---feat-principles-assessment-case-studies.pdf
16、Monetary Authority of Singapore. (2023). Veritas Document 5: From Methodologies to Integration.
https://www.mas.gov.sg/-/media/mas/news/media-releases/veritas-document-5---from-methodologies-to-integration.pdf
17、Monetary Authority of Singapore. (2023). Veritas Document 6: FEAT Principles Assessment Case Studies.
https://www.mas.gov.sg/-/media/mas/news/media-releases/veritas-document-6---feat-principles-assessment-case-studies.pdf
18、Monetary Authority of Singapore. (2024, December). Artificial intelligence model risk management: Observations from a thematic review [Information paper].
https://www.mas.gov.sg/-/media/mas-media-library/publications/monographs-or-information-paper/imd/2024/information-paper-on-ai-risk-management-final.pdf
(二)研究報告與學術文獻
1. Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30, 4765–4774.
2. Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why should I trust you?” Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’16) (pp. 1135–1144). | zh_TW |