NCCU Library
Publications: Theses
Title: 為人工智慧歡呼三聲? 探討台灣公務員對採用生成式AI的看法與態度
Three Cheers for AI? Exploring Views and Attitudes Toward the Adoption of Generative AI by Taiwanese Civil Servants
Author: Lin, Min-Wei (林民偉)
Contributors: Chen, Don-Yun (陳敦源); Lin, Min-Wei (林民偉)
Keywords: Artificial intelligence; Generative AI; Use and adoption; UTAUT model; Trust in AI; Public sector; Civil servants
Date: 2026
Uploaded: 2-Mar-2026 12:30:08 (UTC+8)
Abstract:
Recent advances in generative artificial intelligence (gen-AI) are accelerating the diffusion of AI-enabled tools across government, yet the extent of adoption in public organizations remains understudied. This dissertation examines how civil servants evaluate, use, and trust gen-AI chatbots (e.g., ChatGPT, Copilot, Gemini) in daily work, with Taiwan’s civil service providing a theoretically informative setting. The central goal is to identify when gen-AI becomes a practical tool rather than a broadly endorsed innovation, and what this implies for models of technology acceptance in bureaucratic organizations.

The dissertation builds on the Unified Theory of Acceptance and Use of Technology (UTAUT) but adapts it to the distinctive properties of gen-AI. In addition to the four core UTAUT constructs (performance expectancy, effort expectancy, social influence, and facilitating conditions), the research framework incorporates attitudes toward gen-AI and trust in gen-AI as AI-relevant extensions that are frequently treated as central to AI acceptance.

Empirically, the study employs a sequential explanatory mixed-methods design. The quantitative phase draws on a nationwide online survey of Taiwanese civil servants and estimates a structural equation model (SEM) based on the extended UTAUT model. The qualitative phase consists of 12 semi-structured interviews that examine how respondents use generative AI in practice and how they weigh its benefits and risks.

The quantitative results reveal a consistent intention–use puzzle: behavioral intention is negatively associated with reported gen-AI use, and the core UTAUT constructs are also negatively associated with behavioral intention, contrary to theoretical expectations. Diagnostic analyses indicate that this counterintuitive pattern is not readily eliminated by alternative specifications. Trust, by contrast, is positively associated with behavioral intention, but it does not resolve the intention–use reversal.
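The extended model described above can be sketched in lavaan-style syntax, the notation shared by SEM tools such as R's lavaan and Python's semopy. This is an illustrative reconstruction only: the construct list comes from the abstract, but the indicator names (pe1, bi1, ...) and the exact structural paths are assumptions, not the dissertation's actual specification.

```python
# Illustrative lavaan/semopy-style specification of the extended UTAUT model.
# Construct names follow the abstract; indicator names are placeholders,
# NOT the dissertation's actual survey items.
MODEL_DESC = """
# measurement model: latent construct =~ observed indicators
PE  =~ pe1 + pe2 + pe3    # performance expectancy
EE  =~ ee1 + ee2 + ee3    # effort expectancy
SI  =~ si1 + si2 + si3    # social influence
FC  =~ fc1 + fc2 + fc3    # facilitating conditions
ATT =~ att1 + att2 + att3 # attitude toward gen-AI (extension)
TR  =~ tr1 + tr2 + tr3    # trust in gen-AI (extension)
BI  =~ bi1 + bi2 + bi3    # behavioral intention

# structural model: intention driven by core constructs plus extensions;
# self-reported use driven by intention and facilitating conditions
BI  ~ PE + EE + SI + ATT + TR
use ~ BI + FC
"""

# Quick sanity check: list the latent constructs declared in the spec.
paths = [line.split("#")[0].strip() for line in MODEL_DESC.splitlines()]
latents = [p.split("=~")[0].strip() for p in paths if "=~" in p]
print(latents)
```

With semopy, such a string would be fitted via `semopy.Model(MODEL_DESC).fit(df)`; the abstract does not name the SEM software used, so treat this purely as notation for the hypothesized paths.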
The interviews clarify why gen-AI adoption can deviate from intention-based models in public organizations. Respondents commonly describe gen-AI use as episodic and task-contingent, and they emphasize that use carries practical costs even when attitudes toward gen-AI, and stated intentions to use it, are favorable. Taken together, the mixed-methods evidence suggests that behavioral intention captures broad approval of generative AI, whereas actual use is governed by task need, task–technology fit, fact-checking burdens, and organizational constraints. The dissertation therefore specifies boundary conditions for UTAUT-style models when the use of generative AI is discretionary and when organizational practices, rules, and norms regarding its use vary across work settings.
參考文獻 Ahn, M. J., and Chen, Y. C. (2022). Digital transformation toward AI-augmented public administration: The perception of government employees and the willingness to use AI in government. Government Information Quarterly, 39(2), 101664. https://doi.org/10.1016/j.giq.2021.101664. Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179-211. https://doi.org/10.1016/0749-5978(91)90020-T. Alikhani, M., Harris, B., and Patnaik, S. (2025). How are Americans using AI? Evidence from a nationwide survey. Brookings Institution. November 25. https://www.brookings.edu/articles/how-are-americans-using-ai-evidence-from-a-nationwide-survey/. Alon-Barkat, S., and Busuioc, M. (2023). Human–AI interactions in public sector decision making: “Automation bias” and “selective adherence” to algorithmic advice. Journal of Public Administration Research and Theory, 33(1), 153-169. https://doi.org/10.1093/jopart/muac007. Aoki, N. (2020). An experimental study of public trust in AI chatbots in the public sector. Government Information Quarterly, 37(4), 101490. https://doi.org/10.1016/j.giq.2020.101490. Ardoin, P. J., and Hicks, W. D. (2024). Fear and loathing: ChatGPT in the political science classroom. PS: Political Science & Politics, 57(4): 583-594. https://doi.org/10.1017/S1049096524000131. Balakrishnan, J., Abed, S. S., and Jones, P. (2022). The role of meta-UTAUT factors, perceived anthropomorphism, perceived intelligence, and social self-efficacy in chatbot-based services? Technological Forecasting and Social Change, 180, 121692. https://doi.org/10.1016/j.techfore.2022.121692. Benk, M., Kerstan, S., von Wangenheim, F., and Ferrario, A. (2025). Twenty-four years of empirical research on trust in AI: a bibliometric review of trends, overlooked issues, and future directions. AI & SOCIETY, 40, 2083-2106. https://doi.org/10.1007/s00146-024-02059-y. Berryhill, J., Heang, K. K., Clogher, R., and McBride, K. (2019). 
Hello, world: Artificial intelligence and its use in the public sector. OECD Working Papers on Public Governance. https://doi.org/10.1787/19934351. Borins, S. (2002). Leadership and innovation in the public sector. Leadership & Organization Development Journal, 23(8), 467-476. https://doi.org/10.1108/01437730210449357. Braun, V., and Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2): 77-101. http://doi.org/10.1191/1478088706qp063oa. Braun, V., and V. Clarke. (2012). Thematic analysis. In APA Handbook of research methods in psychology Vol. 2: Research designs, edited by H. Cooper, pp. 57–71. Washington, DC: APA Books. Bright, J., Enock, F., Esnaashari, S., Francis, J., Hashem, Y., and Morgan, D. (2025). Generative AI is already widespread in the public sector: Evidence from a survey of UK public sector professionals. Digital Government: Research and Practice, 6(1), 1-13. https://doi.org/10.1145/3700140. Cai, L., Yuen, K. F., and Wang, X. (2023). Explore public acceptance of autonomous buses: An integrated model of UTAUT, TTF and trust. Travel Behaviour and Society, 31, 120-130. https://doi.org/10.1016/j.tbs.2022.11.010. Carmichael, M. (2025). People don’t trust AI tools, but use them anyway. Ipsos. September 16. https://www.ipsos.com/en-us/people-dont-trust-ai-tools-use-them-anyway. Center for Data Ethics and Innovation (CDEI) (2024). Public attitudes to data and AI: Tracker survey (Wave 3). Center for Data Ethics and Innovation. https://www.gov.uk/government/publications/public-attitudes-to-data-and-ai-tracker-survey-wave-3. Chen, D.-Y., Chu, P.-Y., Hsiao, N.-Y., Huang, T.-Y., Liao, Z.-P., and Tseng, H.-L. (2022). Government Digital Transformation: A Must-Read Introduction. Taipei, Taiwan: Wunan Books. Chen, D.-Y., Chang, P.-H., Liao, Z.-P., Huang, H., and Wang, C.-H. (2024). Digital Democratic Governance: A Public Reflection on Sovereignty, Innovation, and Social Development. Taipei, Taiwan: Wunan Books. 
Chen, G., Fan, J., and Azam, M. (2024). Exploring artificial intelligence (AI) chatbots adoption among research scholars using unified theory of acceptance and use of technology (UTAUT). Journal of Librarianship and Information Science, 57(4), 1205-1223. https://doi.org/10.1177/09610006241269189. Chen, T., Guo, W., Gao, X., and Liang, Z. (2021). AI-based self-service technology in public service delivery: User experience and influencing factors. Government Information Quarterly, 38(4), 101520. https://doi.org/10.1016/j.giq.2020.101520. Chen, T., Gascó-Hernandez, M., and Esteve, M. (2024). The adoption and implementation of artificial intelligence chatbots in public organizations: Evidence from US state governments. The American Review of Public Administration, 54(3), 255-270. https://doi.org/10.1177/02750740231200522. Cho, S., Hur, J.-Y., and Kim, D. (2025). Bridging trust in AI and its adoption: The role of organizational support in AI chatbot implementation in Korean government agencies. Government Information Quarterly, 42(4), 102081. https://doi.org/10.1016/j.giq.2025.102081. Choudhury, A., and Shamszare, H. (2023). Investigating the impact of user trust on the adoption and use of ChatGPT: Survey analysis. Journal of Medical Internet Research, 25, e47184. https://doi.org/10.2196/47184. Choung, H., David, P., and Ross, A. (2023). Trust in AI and its role in the acceptance of AI technologies. International Journal of Human–Computer Interaction, 39(9), 1727-1739. https://doi.org/10.1080/10447318.2022.2050543. Chung, L.-H. and Chung, J. (2023). Executive Yuan limits AI use in agencies. Taipei Times. September 1. https://www.taipeitimes.com/News/taiwan/archives/2023/09/01/2003805597. Coleman J. (1990). The foundations of social theory. Cambridge, MA: Harvard University Press. Compeau, D. R., and Higgins, C. A. (1995a). Computer self-efficacy: Development of a measure and initial test. MIS Quarterly, 189-211. https://doi.org/10.2307/249688. Compeau, D. 
R., and Higgins, C. A. (1995b). Application of social cognitive theory to training for computer skills. Information Systems Research, 6(2), 118-143. https://doi.org/10.1287/isre.6.2.118. Council of the European Union (2023). ChatGPT in the public sector- overhyped or overlooked? Council of the European Union. April 24. https://www.consilium.europa.eu/media/63818/art-paper-chatgpt-in-the-public-sector-overhyped-or-overlooked-24-april-2023_ext.pdf. Creswell, J. W. (2015). A concise introduction to mixed methods research. Thousand Oaks, CA: Sage publications. Creswell, J. W., and Creswell, J. D. (2022). Research design: Qualitative, quantitative, and mixed methods approaches. Sixth Edition. Thousand Oaks, CA: Sage publications. Creswell, J. W., and Plano Clark, V. L. (2011). Designing and conducting mixed methods research. Second Edition. Thousand Oaks, CA: Sage publications. Daly, S. J., Wiewiora, A., and Hearn, G. (2025). Shifting attitudes and trust in AI: Influences on organizational AI adoption. Technological Forecasting and Social Change, 215, 124108. https://doi.org/10.1016/j.techfore.2025.124108. Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319-340. https://doi.org/10.2307/249008. Davis, F. D., Bagozzi, R. P., and Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), 982-1003. https://doi.org/10.1287/mnsc.35.8.982. Davis, F. D., Bagozzi, R. P., and Warshaw, P. R. (1992). Extrinsic and intrinsic motivation to use computers in the workplace. Journal of Applied Social Psychology, 2(2), 95-113. https://doi.org/10.1111/j.1559-1816.1992.tb00945.x. Desouza, K. (2018). Delivering artificial intelligence in government: Challenges and opportunities. IBM Center for The Business of Government. 
https://www.businessofgovernment.org/report/delivering-artificial-intelligence-government-challenges-and-opportunities. Dunleavy, P., Margetts, H., Bastow, S., and Tinkler, J. (2006). New public management is dead—long live digital-era governance. Journal of Public Administration Research and Theory, 16(3), 467-494. https://doi.org/10.1093/jopart/mui057. Dwivedi, Y. K., Rana, N. P., Jeyaraj, A., Clement, M., and Williams, M. D. (2019). Re-examining the unified theory of acceptance and use of technology (UTAUT): Towards a revised theoretical model. Information Systems Frontiers, 21, 719-734. https://doi.org/10.1007/s10796-017-9774-y. Eagly, A. H., and Chaiken, S. (1993). The psychology of attitudes. Fort Worth, TX: Harcourt Brace Jovanovich College Publishers. Eggers, W. D., Schatsky, D., and Viechnicki, P. (2017). AI-augmented government. Using cognitive technologies to redesign public sector work. Deloitte. https://www2.deloitte.com/us/en/insights/focus/cognitive-technologies/artificial-intelligence-government.html. Ernst & Young (2024). EY Pulse Survey: insights into the integration of AI in government. Ernst & Young. August. https://www.ey.com/en_us/industries/government-public-sector/insights-into-the-integration-of-ai-in-government. Everington, K. (2024). Nvidia CEO says ‘T-AI-WAN’ will help world build AI infrastructure. Taiwan News. June 7. https://taiwannews.com.tw/news/5885580. Everington, K. (2025). Taiwan passes AI Basic Act. Taiwan News. December 23. https://www.taiwannews.com.tw/news/6270744. Executive Yuan (2023). Guidelines for the Executive Yuan and affiliated agencies on using generative artificial intelligence (行政院及所屬機關(構)使用生成式AI參考指引). Executive Yuan. August 31. https://www.ey.gov.tw/File/99391880264BBB73. Executive Yuan (2025a). Cultivating public sector AI talent for smart government. June 13. https://english.ey.gov.tw/News3/9E5540D592A5FECD/7afed842-0d86-4d59-9144-0269b120ece6. Executive Yuan (2025b). 
The Executive Yuan convened a meeting of the Smart Nation Promotion Task Force; Premier Cho expressed hope that a new ‘Ten Major AI Infrastructure Projects’ initiative will allow Taiwan to reinvent itself and achieve a comprehensive transformation (行政院召開智慧國家推動小組會議 卓揆盼AI新十大建設 讓臺灣脫胎換骨、全面轉型). July 3. https://www.ey.gov.tw/Page/9277F759E41CCD91/c18c1f19-47ec-4907-932c-4d50d96df8e7. Executive Yuan (2025c). Executive Yuan approves draft bill for basic law on AI. August 28. https://english.ey.gov.tw/Page/61BF20C3E89B856/89da216e-5741-43e4-aac8-af4551a21499. Executive Yuan (2025d). A proposed basic law for AI. September 12. https://english.ey.gov.tw/News3/9E5540D592A5FECD/dfa7107b-f323-4961-a31b-873bbc12ea47. Fishbein, M, and Ajzen, I. (1975). Belief, attitude, intention and behavior: An introduction to theory and research. Reading, MA: Addison-Wesley. Fornell, C., and Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39-50. https://doi.org/10.1177/002224378101800104. Frisch‐Aviram, N., Spanghero Lotta, G., and Jordão de Carvalho, L. (2024). “Chat‐up”: The role of competition in street‐level bureaucrats’ willingness to break technological rules and use generative pre‐trained transformers (GPTs). Public Administration Review. Early View. https://doi.org/10.1111/puar.13824. Gansser, O. A., and Reich, C. S. (2021). A new acceptance model for artificial intelligence with extensions to UTAUT2: An empirical study in three segments of application. Technology in Society, 65, 101535. https://doi.org/10.1016/j.techsoc.2021.101535. Gerlich, M. (2023). Perceptions and acceptance of artificial intelligence: A multi-dimensional study. Social Sciences, 12(9), 502. https://doi.org/10.3390/socsci12090502. Gesk, T. S., and Leyer, M. (2022). Artificial intelligence in public services: When and why citizens accept its usage. Government Information Quarterly, 39(3), 101704. 
https://doi.org/10.1016/j.giq.2022.101704. Giesecke, O. (2024). AI Adoption in the Public Sector: Outcomes from a Nationwide Survey. Oliver Giesecke’s Personal Website. https://papers.olivergiesecke.com/AISurveySlides.pdf. Gillespie, N., Lockey, S., Curtis, C., Pool, J., and Akbari, A. (2023). Trust in artificial intelligence: A global study. The University of Queensland and KPMG Australia. https://assets.kpmg.com/content/dam/kpmg/au/pdf/2023/trust-in-ai-global-insights-2023.pdf. Gillespie, N., Lockey, S., Ward, T., Macdade, A., and Hassed, G. (2025). Trust, attitudes and use of artificial intelligence: A global study 2025. The University of Melbourne and KPMG. DOI: 10.26188/28822919. https://kpmg.com/xx/en/our-insights/ai-and-technology/trust-attitudes-and-use-of-ai.html. Glikson, E., and Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627-660. https://doi.org/10.5465/annals.2018.0057. Global Views Magazine (2023). Global Views investigates: The 2023 survey on the impact of generative AI on the future of the workplace. Global Views Magazine. June. https://gvsrc.cwgv.com.tw/articles/index/14901/1. Goodhue, D. L., and Thompson, R. L. (1995). Task-technology fit and individual performance. MIS quarterly, 19(2), 213-236. Guenduez, A. A., and Mettler, T. (2023). Strategically constructed narratives on artificial intelligence: What stories are told in governmental artificial intelligence policies? Government Information Quarterly, 40(1), 101719. https://doi.org/10.1016/j.giq.2022.101719. Gursoy, D., Chi, O. H., Lu, L., and Nunkoo, R. (2019). Consumers acceptance of artificially intelligent (AI) device use in service delivery. International Journal of Information Management, 49, 157-169. https://doi.org/10.1016/j.ijinfomgt.2019.03.008. Ho, S. S., and Cheung, J. C. (2024). 
Trust in artificial intelligence, trust in engineers, and news media: Factors shaping public perceptions of autonomous drones through UTAUT2. Technology in Society, 77, 102533. https://doi.org/10.1016/j.techsoc.2024.102533. Hoff, K. A., and Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3), 407-434. Hooda, A., Gupta, P., Jeyaraj, A., Giannakis, M., and Dwivedi, Y. K. (2022). The effects of trust on behavioral intention and use behavior within e-government contexts. International Journal of Information Management, 67, 102553. https://doi.org/10.1016/j.ijinfomgt.2022.102553. Hu, L. T., and Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1-55. https://doi.org/10.1080/10705519909540118. Huang, H., and Chen, D.-Y. (2023). Public administration research on human and AI collaboration: A multilevel reflection on public sector organizational issues. Taiwanese Journal of Political Science, 96, 139-178. (In Chinese). https://doi.org/10.6166/TJPS.202306_(96).0004. Huang, H., Kim, K. C., Young, M. M., and Bullock, J. B. (2022). A matter of perspective: Differential evaluations of artificial intelligence between managers and staff in an experimental simulation. Asia Pacific Journal of Public Administration, 44(1), 47-65. https://doi.org/10.1080/23276665.2021.1945468. Huang, H., Tseng, K.-C., Liao, C.-P., and Chen, D.-Y. (2021). When AI joins the government: A reflection on AI application and public administration theory. Journal of Civil Service, 13(2), 91-114. (In Chinese). https://www.airitilibrary.com/Article/Detail?DocID=P20210507001-202111-202112300016-202112300016-91-114. Hujran, O., Al-Debei, M. M., Al-Adwan, A. S., Alarabiat, A., and Altarawneh, N. (2023). Examining the antecedents and outcomes of smart government usage: An integrated model. 
Government Information Quarterly, 40(1), 101783. https://doi.org/10.1016/j.giq.2022.101783. Ipsos (2023). Global views on A.I. Ipsos. July 2023. https://www.ipsos.com/sites/default/files/ct/news/documents/2023-07/Ipsos%20Global%20AI%202023%20Report-WEB_0.pdf. Ivankova, N. V., Creswell, J. W., and Stick, S. L. (2006). Using mixed-methods sequential explanatory design: From theory to practice. Field methods, 18(1), 3-20. https://doi.org/10.1177/1525822X05282260. Jeffares, S. (2020). The virtual public servant: Artificial intelligence and frontline work. Cham, Switzerland: Springer Nature. Kao, J.-H. (2024). How to turn Taiwan into an “AI Island? Lai Ching-te lists 3 steps: All civil servants must receive training. (如何讓台灣成為「AI智慧島」?賴清德今說明3步驟:公務人員都要受訓). FTV News. July 30. https://www.ftvnews.com.tw/news/detail/2024730W0241. Kasilingam, D. L. (2020). Understanding the attitude and intention to use smartphone chatbots for shopping. Technology in Society, 62, 101280. https://doi.org/10.1016/j.techsoc.2020.101280. Kelly, S., Kaye, S. A., and Oviedo-Trespalacios, O. (2023). What factors contribute to the acceptance of artificial intelligence? A systematic review. Telematics and Informatics, 77, 101925. https://doi.org/10.1016/j.tele.2022.101925. Kennedy, B., Yam, E., Kikuchi, E., Pula, I., and Fuentes, J. (2025). How Americans view AI and its impact on people and society. Pew Research. September 17. https://www.pewresearch.org/science/2025/09/17/how-americans-view-ai-and-its-impact-on-people-and-society/. Khechine, H., Lakhal, S., and Ndjambou, P. (2016). A meta‐analysis of the UTAUT model: Eleven years later. Canadian Journal of Administrative Sciences/Revue canadienne des sciences de l’administration, 33(2), 138-152. https://doi.org/10.1002/cjas.1381. Kim, S. (2009). Revising Perry’s measurement scale of public service motivation. The American Review of Public Administration, 39(2), 149-163. https://doi.org/10.1177/0275074008317681. Kim, S. (2011). 
Testing a revised measure of public service motivation: Reflective versus formative specification. Journal of Public Administration Research and Theory, 21(3), 521-546. https://doi.org/10.1093/jopart/muq048. Kleizen, B., Van Dooren, W., Verhoest, K., and Tan, E. (2023). Do citizens trust trustworthy artificial intelligence? Experimental evidence on the limits of ethical AI measures in government. Government Information Quarterly, 40(4), 101834. https://doi.org/10.1016/j.giq.2023.101834. Kline, R. B. (2016). Principles and practice of structural equation modeling. 4th Edition. New York: Guilford publications. Lee, J. D., and See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50-80. https://doi.org/10.1518/hfes.46.1.50_30392. Lee, S., Jones-Jang, S. M., Chung, M., Kim, N., and Choi, J. (2024). Who is using ChatGPT and why? Extending the unified theory of acceptance and use of technology (UTAUT) model. Information Research: An International Electronic Journal, 29(1), 54-72. https://doi.org/10.47989/ir291647. Lee, T.-P., Chang, C.-Y., and Lee, C.-L. (2022). Unintentional discrimination in application of artificial intelligence to public policies: A systematic article review. Journal of Public Administration, 63, 1-49. (In Chinese). https://doi.org/10.30409/JPA.202209_(63).0001. Lin, L.-S. (2024). Major transformation of 210,000 civil servants: Streamlining work and upgrading services [21萬公務員大改造 工作精簡、服務升級]. Commonwealth Magazine (天下雜誌). November 13. https://www.cw.com.tw/article/5132644. Lin, L. (2025). About 1 in 5 U.S. workers now use AI in their job, up since last year. Pew Research Center. October 6. https://www.pewresearch.org/short-reads/2025/10/06/about-1-in-5-us-workers-now-use-ai-in-their-job-up-since-last-year/. Ma, X., and Huo, Y. (2023). Are users willing to embrace ChatGPT? Exploring the factors on the acceptance of chatbots from the perspective of AIDUA framework. Technology in Society, 75, 102362. 
https://doi.org/10.1016/j.techsoc.2023.102362. Maciejewski, M. (2017). To do more, better, faster and more cheaply: Using big data in public administration. International Review of Administrative Sciences, 83(1_suppl), 120-135. https://doi.org/10.1177/0020852316640058. Madan, R., and Ashok, M. (2023). AI adoption and diffusion in public administration: A systematic literature review and future research agenda. Government Information Quarterly, 40(1), 101774. https://doi.org/10.1016/j.giq.2022.101774. Martins, C., Oliveira, T., and Popovič, A. (2014). Understanding the Internet banking adoption: A unified theory of acceptance and use of technology and perceived risk application. International Journal of Information Management, 34(1), 1-13. https://doi.org/10.1016/j.ijinfomgt.2013.06.002. Mayer, R. C., Davis, J. H. and Schoorman, F. D. (1995). An integrative model or organizational trust. Academy of Management Review, 20(3), 709-734. https://doi.org/10.2307/258792. McGrath, C., Farazouli, A., and Cerratto-Pargman, T. (2025). Generative AI chatbots in higher education: a review of an emerging research area. Higher Education, 89, 1533-1549. https://doi.org/10.1007/s10734-024-01288-w. McKnight, D. H., Carter, M., Thatcher, J. B., and Clay, P. F. (2011). Trust in a specific technology: An investigation of its components and measures. ACM Transactions on Management Information Systems, 2(2), 1-25. https://doi.org/10.1145/1985347.1985353. Medaglia, R., Gil-Garcia, J. R., and Pardo, T. A. (2023). Artificial intelligence in government: Taking stock and moving forward. Social Science Computer Review, 41(1), 123-140. https://doi.org/10.1177/08944393211034087. Mehr, H. (2017). Artificial intelligence for citizen services and government. Harvard Ash Center Democratic Governance and Innovation. Harvard Kennedy School. https://ash.harvard.edu/wp-content/uploads/2024/02/artificial_intelligence_for_citizen_services.pdf. Mergel, I., Dickinson, H., Stenvall, J., and Gasco, M. (2024). 
Implementing AI in the public sector. Public Management Review, 1-14. https://doi.org/10.1080/14719037.2023.2231950. Mergel, I., Ganapati, S., and Whitford, A. B. (2021). Agile: A new way of governing. Public Administration Review, 81(1), 161-165. https://doi.org/10.1111/puar.13202. Mikalef, P., Lemmer, K., Schaefer, C., Ylinen, M., Fjørtoft, S. O., Torvatn, H. Y., Gupta, M., and Niehaves, B. (2023). Examining how AI capabilities can foster organizational performance in public organizations. Government Information Quarterly, 40(2), 101797. https://doi.org/10.1016/j.giq.2022.101797. Milakovich, M. E. (2022). Digital governance: Applying advanced technologies to improve public service. Second Edition. New York: Routledge. Ministry of Digital Affairs (MODA) (2024). Digital government program 2.0: Ministry of Digital Affairs collaborates with government agencies to drive digital innovation in public governance. Ministry of Digital Affairs. July 15. https://moda.gov.tw/en/press/press-releases/13098. Misra, S., Katz, B., Roberts, P., Carney, M., and Valdivia, I. (2024). Toward a person-environment fit framework for artificial intelligence implementation in the public sector. Government Information Quarterly, 41(3), 101962. https://doi.org/10.1016/j.giq.2024.101962. Moore, G. C., and Benbasat, I. (1991). Development of an instrument to measure the perceptions of adopting an information technology innovation. Information Systems Research, 2(3), 192-222. https://doi.org/10.1287/isre.2.3.192. Morse, J. M. (1991). Approaches to qualitative-quantitative methodological triangulation. Nursing Research, 40(2), 120-123. National Development Council (2020). Digital Government Program 2.0 of Taiwan (2021-2025). National Development Council. July. https://moda.gov.tw/digital-affairs/digital-service/operations/120. National Development Council (2025). Ten Major AI Infrastructure Projects Initiative: Building a Smarter New Way of Life for Everyone (AI新十大建設,打造全民智慧新生活). September 25. 
https://www.ndc.gov.tw/nc_14813_39387. Neudert, L. M., Knuutila, A., and Howard, P. N. (2020). Global attitudes towards AI, machine learning & automated decision making. Oxford Commission on AI and Good Governance. https://julietwaters.com/wp-content/uploads/2024/01/globalattitudestowardsaimachinelearning2020-1.pdf. Nunnally, J. C., and Bernstein, I. H. (1994). Psychometric Theory. New York: McGraw-Hill. OECD (2019). Artificial intelligence in society. Paris, France: OECD Publishing. https://www.oecd.org/en/publications/2019/06/artificial-intelligence-in-society_c0054fa1.html. O’Shaughnessy, M. R., Schiff, D. S., Varshney, L. R., Rozell, C. J., and Davenport, M. A. (2023). What governs attitudes toward artificial intelligence adoption and governance? Science and Public Policy, 50(2), 161-176. https://doi.org/10.1093/scipol/scac056. Partnership for Public Service (2019). More than meets AI: Assessing the impact of artificial intelligence on the work of government. IBM Center for the Business of Government. https://www.businessofgovernment.org/sites/default/files/More%20Than%20Meets%20AI.pdf. Perry, J. L. (1996). Measuring public service motivation: An assessment of construct reliability and validity. Journal of Public Administration Research and Theory, 6(1), 5-22. https://doi.org/10.1093/oxfordjournals.jpart.a024303. Petty, R. E., and Cacioppo, J. T. (1996). Attitudes and persuasion: Classic and contemporary approaches. Boulder, CO: Westview Press. Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., and Podsakoff, N. P. (2003). Common method biases in behavioral research: a critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879-903. Reuters (2025). Taiwan plans AI projects to boost economy by $510 billion. July 23. https://www.reuters.com/world/asia-pacific/taiwan-plans-ai-projects-boost-economy-by-510-billion-2025-07-23/. Rizk, A., and Lindgren, I. (2025). 
Automated decision-making in public administration: Changing the decision space between public officials and citizens. Government Information Quarterly, 42(3), 102061. https://doi.org/10.1016/j.giq.2025.102061. Rogers, E. M. (1995). Diffusion of innovations (4th Edition). New York: The Free Press. Schepman, A., and Rodway, P. (2020). Initial validation of the general attitudes towards Artificial Intelligence Scale. Computers in Human Behavior Reports, 1, 100014. https://doi.org/10.1016/j.chbr.2020.100014. Schiff, D. S., Schiff, K. J., and Pierson, P. (2022). Assessing public value failure in government adoption of artificial intelligence. Public Administration, 100(3), 653-673. https://doi.org/10.1111/padm.12742. Senadheera, S., Yigitcanlar, T., Desouza, K.C., Mossberger, K., Corchado, J., Mehmood, R., Li, R.Y.M. and Cheong, P.H. (2025). Understanding chatbot adoption in local governments: A review and framework. Journal of Urban Technology, 32(3), 35-69. https://doi.org/10.1080/10630732.2023.2297665. Sheeran, P. (2002). Intention—behavior relations: a conceptual and empirical review. European Review of Social Psychology, 12(1), 1-36. https://doi.org/10.1080/14792772143000003. Silic, M., and Back, A. (2014). Shadow IT–A view from behind the curtain. Computers & Security, 45, 274-283. https://doi.org/10.1016/j.cose.2014.06.007. Sohn, K., and Kwon, O. (2020). Technology acceptance theories and factors influencing artificial Intelligence-based intelligent products. Telematics and Informatics, 47, 101324. https://doi.org/10.1016/j.tele.2019.101324. Söllner, M., Hoffmann, A., and Leimeister, J. M. (2016). Why different trust relationships matter for information systems users. European Journal of Information Systems, 25(3), 274-287. https://doi.org/10.1057/ejis.2015.17. Straub, V. J., Hashem, Y., Bright, J., Bhagwanani, S., Morgan, D., Francis, J., Esnaashari, S., and Margetts, H. (2024). 
AI for bureaucratic productivity: Measuring the potential of AI to help automate 143 million UK government transactions. ArXiv preprint. https://arxiv.org/abs/2403.14712. Taiwan Government Bureaucrats Survey (2023). Taiwan government bureaucrats survey 2023. National Chengchi University. Tashakkori, A., and Teddlie, C. (1998). Mixed methodology: Combining qualitative and quantitative approaches. Thousand Oaks, CA: Sage Publications. Taylor, S., and Todd, P. (1995). Assessing IT usage: The role of prior experience. MIS Quarterly, 19(4), 561-570. https://doi.org/10.2307/249633. Thiebes, S., Lins, S., and Sunyaev, A. (2021). Trustworthy artificial intelligence. Electronic Markets, 31, 447-464. https://doi.org/10.1007/s12525-020-00441-4. Thompson, R. L., Higgins, C. A., and Howell, J. M. (1991). Personal computing: Toward a conceptual model of utilization. MIS Quarterly, 15(1), 125-143. https://doi.org/10.2307/249443. Ting, Y.-J., and Lin, T.-Z. (2020). The exploration of the potential on artificial intelligence applications to improve public governance. Archives Semiannual, 19(2), 24-41. (In Chinese). https://www.airitilibrary.com/Article/Detail/P20190408001-202012-202101140011-202101140011-24-41. Ting, Y.-J., Huang, H., Lin, T.-L., and Chang, W.-H. (2023). Expanding Governance Capabilities: The Experience of AI Implementation in Taiwan. East Asian Policy, 15(02), 44-62. https://doi.org/10.1142/S1793930523000119. Tseng, K.-C., Chen, D.-Y., and Hu, L.-T. (2009). Promoting citizen-centered E-government initiatives: Vision or illusion? Journal of Public Administration, 33, 1-43. (In Chinese). https://doi.org/10.30409/JPA.200912_(33).0001. Tsfati, Y., and Cappella, J. N. (2003). Do people watch what they do not trust? Exploring the association between news media skepticism and exposure. Communication Research, 30(5), 504-529. https://doi.org/10.1177/0093650203253371. United States Governmental Accountability Office (U.S. GAO) (2025). 
Artificial Intelligence: Generative AI Use and Management at Federal Agencies. United States Government Accountability Office. July. Report GAO-25-107653. https://www.gao.gov/products/gao-25-107653. Valle-Cruz, D., Gil-Garcia, J. R., and Sandoval-Almazan, R. (2024). Artificial intelligence algorithms and applications in the public sector: a systematic literature review based on the PRISMA approach. In Y. Charalabidis, R. Medaglia, and C. van Noordt (eds.), Research Handbook on Public Management and Artificial Intelligence, pp. 8-26. Cheltenham, UK: Edward Elgar Publishing. Van Noordt, C., and Misuraca, G. (2022). Artificial intelligence for the public sector: Results of landscaping the use of AI in government across the European Union. Government Information Quarterly, 39(3), 101714. https://doi.org/10.1016/j.giq.2022.101714. Venkatesh, V. (2022). Adoption and use of AI tools: a research agenda grounded in UTAUT. Annals of Operations Research, 308(1), 641-652. https://doi.org/10.1007/s10479-020-03918-9. Venkatesh, V., Morris, M. G., Davis, G. B., and Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425-478. https://doi.org/10.2307/30036540. Venkatesh, V., Thong, J. Y., and Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157-178. https://doi.org/10.2307/41410412. Venkatesh, V., Thong, J. Y., Chan, F. K., and Hu, P. J. (2016). Managing citizens’ uncertainty in e-government services: The mediating and moderating roles of transparency and trust. Information Systems Research, 27(1), 87-111. https://doi.org/10.1287/isre.2015.0612. Viechnicki, P., and Eggers, W. D. (2017). How much time and money can AI save government? Cognitive technologies could free up hundreds of millions of public sector worker hours. Deloitte University Press. 
https://www2.deloitte.com/content/dam/insights/us/articles/3834_How-much-time-and-money-can-AI-save-government/DUP_How-much-time-and-money-can-AI-save-government.pdf. Vogl, T. M., Seidelin, C., Ganesh, B., and Bright, J. (2020). Smart technology and the emergence of algorithmic bureaucracy: Artificial intelligence in UK local authorities. Public Administration Review, 80(6), 946-961. https://doi.org/10.1111/puar.13286. Wang, C.-W., Hsu, B.-Y., and Chen, D.-Y. (2024). Chatbot applications in government frontline services: leveraging artificial intelligence and data governance to reduce problems and increase effectiveness. Asia Pacific Journal of Public Administration, 46(4), 488-511. Wang, Y.-F., Chen, Y.-C., and Chien, S.-Y. (2025). Citizens’ intention to follow recommendations from a government-supported AI-enabled system. Public Policy and Administration, 40(2), 372-400. https://doi.org/10.1177/09520767231176126. Wang, Y. Y., and Wang, Y. S. (2022). Development and validation of an artificial intelligence anxiety scale: An initial application in predicting motivated learning behavior. Interactive Learning Environments, 30(4), 619-634. https://doi.org/10.1080/10494820.2019.1674887. Whitford, A. B., and Lee, S. Y. (2015). Exit, voice, and loyalty with multiple exit options: Evidence from the US federal workforce. Journal of Public Administration Research and Theory, 25(2), 373-398. https://doi.org/10.1093/jopart/muu004. Wing, J. M. (2021). Trustworthy AI. Communications of the ACM, 64(10), 64-71. https://doi.org/10.1145/3448248. Wirtz, B. W., and Müller, W. M. (2019). An integrated artificial intelligence framework for public management. Public Management Review, 21(7), 1076-1100. https://doi.org/10.1080/14719037.2018.1549268. Wirtz, B. W., Weyerer, J. C., and Geyer, C. (2019). Artificial intelligence and the public sector—applications and challenges. International Journal of Public Administration, 42(7), 596-615. https://doi.org/10.1080/01900692.2018.1498103. 
Wirtz, B. W., Weyerer, J. C., and Sturm, B. J. (2020). The dark sides of artificial intelligence: An integrated AI governance framework for public administration. International Journal of Public Administration, 43(9), 818-829. https://doi.org/10.1080/01900692.2020.1749851. Wu, N., and Wu, P. Y. (2024). Surveying the impact of generative artificial intelligence on political science education. PS: Political Science & Politics, 57(4), 602-609. https://doi.org/10.1017/S1049096524000167. Yoo, S. J., Han, S. H., and Huang, W. (2012). The roles of intrinsic motivators and extrinsic motivators in promoting e-learning in the workplace: A case from South Korea. Computers in Human Behavior, 28(3), 942-950. https://doi.org/10.1016/j.chb.2011.12.015. Young, M. M., Bullock, J. B., and Lecy, J. D. (2019). Artificial discretion as a tool of governance: A framework for understanding the impact of artificial intelligence on public administration. Perspectives on Public Management and Governance, 2(4), 301-313. https://doi.org/10.1093/ppmgov/gvz014. Young, M. M., Himmelreich, J., Bullock, J. B., and Kim, K. C. (2021). Artificial intelligence and administrative evil. Perspectives on Public Management and Governance, 4(3), 244-258. https://doi.org/10.1093/ppmgov/gvab006. Zhang, B., and Dafoe, A. (2019). Artificial intelligence: American attitudes and trends. Oxford: Center for the Governance of AI, Future of Humanity Institute, University of Oxford. https://governanceai.github.io/US-Public-Opinion-Report-Jan-2019/us_public_opinion_report_jan_2019.pdf. Zhang, W., Zuo, N., He, W., Li, S., and Yu, L. (2021). Factors influencing the use of artificial intelligence in government: Evidence from China. Technology in Society, 66, 101675. https://doi.org/10.1016/j.techsoc.2021.101675. Zuiderwijk, A., Chen, Y.-C., and Salem, F. (2021). Implications of the use of artificial intelligence in public governance: A systematic literature review and a research agenda. 
Government Information Quarterly, 38(3), 101577. https://doi.org/10.1016/j.giq.2021.101577.
描述 (Description) 博士 (Doctoral)
國立政治大學 (National Chengchi University)
公共行政學系 (Department of Public Administration)
101256504
資料來源 (Source) http://thesis.lib.nccu.edu.tw/record/#G0101256504
資料類型 (Data Type) thesis
dc.contributor.advisor 陳敦源zh_TW
dc.contributor.advisor Chen, Don-Yunen_US
dc.contributor.author (Authors) 林民偉zh_TW
dc.contributor.author (Authors) Lin, Min-Weien_US
dc.creator (作者) 林民偉zh_TW
dc.creator (作者) Lin, Min-Weien_US
dc.date (日期) 2026en_US
dc.date.accessioned 2-Mar-2026 12:30:08 (UTC+8)
dc.date.available 2-Mar-2026 12:30:08 (UTC+8)
dc.date.issued (上傳時間) 2-Mar-2026 12:30:08 (UTC+8)
dc.identifier (Other Identifiers) G0101256504en_US
dc.identifier.uri (URI) https://nccur.lib.nccu.edu.tw/handle/140.119/161866
dc.description (描述) 博士zh_TW
dc.description (描述) 國立政治大學zh_TW
dc.description (描述) 公共行政學系zh_TW
dc.description (描述) 101256504zh_TW
dc.description.abstract (摘要) 生成式人工智慧(generative AI)的最新進展正加速 AI 工具在政府部門中的擴散,然而公共組織內的採用程度仍缺乏深入研究。本論文以臺灣文官體系作為在理論上具啟發性的研究場域,探討公務人員在日常工作中如何評估、採用並信任生成式 AI (如 ChatGPT、Copilot、Gemini)。本研究的核心目標在於辨識生成式 AI 何時成為可實際運用的工作工具,而非僅是一個被廣泛肯定的新科技,並據此檢視官僚組織情境下科技接受模型的理論意涵。 本研究以「整合型科技接受與使用理論」(UTAUT)為基礎,並針對生成式 AI 的特性加以調整。除了 UTAUT 的四個核心構念(績效期望、努力期望、社會影響、促成條件)外,本研究架構亦納入「對生成式 AI 的態度」與「對生成式 AI 的信任」作為與 AI 相關的延伸構念。在實證設計上,本研究採用順序解釋性混合研究設計。量化階段針對台灣公務員進行全國性線上問卷調查,並基於擴展後的 UTAUT 模型進行結構方程模型(SEM)分析。質化階段則透過 12 場半結構式訪談,探討受訪者在實務中如何使用生成式 AI,以及其如何權衡使用所帶來的效益與風險。 量化結果呈現一致的「意圖—使用之謎題」:行為意圖與自陳之生成式 AI 使用呈負向關聯,且 UTAUT 核心構念亦與行為意圖呈負向關聯,與理論預期相左。診斷分析顯示,這種反直覺的模式不易透過替代模型設定而消除。相較之下,信任與行為意圖呈正向關聯,但仍無法化解意圖與使用之間的反轉。訪談結果進一步釐清,為何在公部門組織中,生成式 AI 的採用可能偏離以意圖為核心的模型邏輯。受訪者普遍將生成式 AI 的使用描述為偶發且視任務而定,並強調即使一般使用者對生成式 AI 持正向態度或使用意圖,實際使用度仍伴隨明顯的成本。綜合而言,混合方法證據顯示:行為意圖反映了對生成式 AI 的廣泛認同,而實際使用則主要取決於任務需求、任務—科技契合度、事實查核負擔與組織限制。因此,當生成式 AI 的使用屬於自願性質,且各工作場域在使用的組織實踐、規則與規範上存在差異時,本論文進一步界定了 UTAUT 理論的適用範圍限制。zh_TW
dc.description.abstract (摘要) Recent advances in generative artificial intelligence (gen-AI) are accelerating the diffusion of AI-enabled tools across government, yet the extent of adoption in public organizations remains understudied. This dissertation examines how civil servants evaluate, use, and trust gen-AI chatbots (e.g., ChatGPT, Copilot, Gemini) in daily work, with Taiwan’s civil service providing a theoretically informative setting. The central goal is to identify when gen-AI becomes a practical tool rather than a broadly endorsed innovation, and what this implies for models of technology acceptance in bureaucratic organizations. The dissertation builds on the Unified Theory of Acceptance and Use of Technology (UTAUT) but adapts it to the distinctive properties of gen-AI. In addition to the four core UTAUT constructs (performance expectancy, effort expectancy, social influence, and facilitating conditions), the research framework incorporates attitudes toward gen-AI and trust in gen-AI as AI-relevant extensions that are frequently treated as central to AI acceptance. Empirically, the study employs a sequential explanatory mixed-methods design. The quantitative phase draws on a nationwide online survey of Taiwanese civil servants and estimates a structural equation model (SEM) based on the extended UTAUT model. The qualitative phase consists of 12 semi-structured interviews that examine how respondents use generative AI in practice and how they weigh its benefits and risks. The quantitative results reveal a consistent intention–use puzzle: behavioral intention is negatively associated with reported gen-AI use, and the core UTAUT constructs are also negatively associated with behavioral intention, contrary to theoretical expectations. Diagnostic analyses indicate that this counterintuitive pattern is not readily eliminated by alternative specifications. 
Trust, by contrast, is positively associated with behavioral intention, but it does not resolve the intention–use reversal. The interviews clarify why gen-AI adoption can deviate from intention-based models in public organizations. Respondents commonly describe gen-AI use as episodic and task-contingent, while also emphasizing practical costs associated with use despite favorable attitudes or stated intention to use gen-AI. Taken together, the mixed-method evidence suggests that behavioral intention captures broad approval of generative AI, whereas actual use is governed by task need, task–technology fit, fact-checking burdens, and organizational constraints. The dissertation therefore specifies boundary conditions for UTAUT-style models when the use of generative AI is discretionary, and organizational practices, rules, and norms regarding its use vary across work settings.en_US
dc.description.tableofcontents DEDICATION iii ACKNOWLEDGEMENTS v ABSTRACT vii 中文摘要 ix TABLE OF CONTENTS xi LIST OF TABLES xiv LIST OF FIGURES xv CHAPTER 1: INTRODUCTION 1 1.1 Research Background 1 1.1.1 The Emergence of AI in Public Administration 1 1.1.2 Taiwan’s Efforts to Modernize Its Civil Service Through AI 6 1.1.3 The Scope of This Study 9 1.2 Research Objectives 11 1.3 Research Questions 14 1.4 Overview of the Chapters 15 CHAPTER 2: LITERATURE REVIEW 18 2.1 Taiwan’s Digital Government Transformation 18 2.2 The Unified Theory of Acceptance and Use of Technology (UTAUT) and Attitudes toward Generative AI 26 2.3 Trust and Generative AI Adoption 36 2.4 Selected Cross-National Evidence on Generative AI Use in the Public Sector 40 2.5 Chapter Summary 43 CHAPTER 3: RESEARCH METHODS 46 3.1 Research Framework 46 3.2 Research Hypotheses 47 3.2.1 Performance Expectancy 47 3.2.2 Effort Expectancy 48 3.2.3 Social Influence 49 3.2.4 Facilitating Conditions 49 3.2.5 Job Replacement Anxiety 50 3.2.6 Attitudes Toward Generative AI Chatbots 51 3.2.7 Trust in Generative AI Chatbots 52 3.2.8 Behavioral Intention and Use 53 3.3 Measurement and Operationalization of Constructs 57 3.4 Research Design 62 3.5 Data and Sampling 68 3.5.1 Quantitative Phase: TGBS Survey of Taiwanese Civil Servants 68 3.5.2 Qualitative Phase: Semi-structured Interviews 69 CHAPTER 4: RESULTS 75 4.1 Quantitative Results 75 4.1.1 Sample Characteristics 75 4.1.2 Descriptive Statistics of Gen-AI Use 77 4.1.3 SEM Estimation Strategy and Decision Rules 83 4.1.4 Measurement Model 84 4.1.5 Structural Model 89 4.1.6 Diagnostic and Alternative Model Specifications 93 4.1.7 Section Summary 103 4.2 Qualitative Results 105 4.2.1 Theme 1. Trust in Generative AI: “Conditional Trust” and the Need for Additional Verification 106 4.2.2 Theme 2: Experience With Generative AI: Heterogeneous Use and Task-Driven Adoption 111 4.2.3 Theme 3. 
Job Anxiety: Replacement Fears Are Often Displaced Into “Competitiveness” and “Misuse” Concerns 115 4.2.4 Theme 4. Support for AI Development: Broad Endorsement with Governance Conditions and “Sovereign AI” Considerations 119 4.2.5 Theme 5. Explaining the Behavioral Intention–Use Puzzle: Need, Skills, and Expressive Responding 122 4.3 Integration and Discussion: Explaining the Intention–Use Puzzle 128 4.3.1 Suspect 1: Measurement Problems 128 4.3.2 Suspect 2: Theory Choice (i.e., UTAUT May Be Incomplete) 129 4.3.3 Suspect 3: Technology-Specific Properties of Generative AI 130 4.3.4 Suspect 4: The Public Sector Context 131 4.3.5 Integrated Implications 132 CHAPTER 5: CONCLUSION 134 5.1 Summary and Contributions of the Study 134 5.2 Limitations 136 5.3 Future Research and Practical Implications 138 REFERENCES 142 APPENDIX 166 A. 行政院及所屬機關(構)使用生成式AI參考指引 (Guidelines for the Executive Yuan and Affiliated Agencies on Using Generative Artificial Intelligence) 166 B. Codebook and Operationalization of All Survey Questions and Items from the 2025 Taiwan Government Bureaucrat Survey-Wave 10 168 C. Interview Protocol for Generative AI Use in Taiwanese Civil Service 176 D. Descriptive Statistics for All Variables 180 E. Response to Dissertation Committee Members’ Suggestions 183zh_TW
dc.format.extent 2399282 bytes
dc.format.mimetype application/pdf
dc.source.uri (資料來源) http://thesis.lib.nccu.edu.tw/record/#G0101256504en_US
dc.subject (關鍵詞) 人工智慧zh_TW
dc.subject (關鍵詞) 生成式AIzh_TW
dc.subject (關鍵詞) 使用採納zh_TW
dc.subject (關鍵詞) UTAUT模型zh_TW
dc.subject (關鍵詞) 對人工智慧的信任zh_TW
dc.subject (關鍵詞) 公共部門zh_TW
dc.subject (關鍵詞) 公務員zh_TW
dc.subject (關鍵詞) Artificial intelligenceen_US
dc.subject (關鍵詞) Generative AIen_US
dc.subject (關鍵詞) Use and adoptionen_US
dc.subject (關鍵詞) UTAUT modelen_US
dc.subject (關鍵詞) Trust in AIen_US
dc.subject (關鍵詞) Public sectoren_US
dc.subject (關鍵詞) Civil servantsen_US
dc.title (題名) 為人工智慧歡呼三聲? 探討台灣公務員對採用生成式AI的看法與態度zh_TW
dc.title (題名) Three Cheers for AI? Exploring Views and Attitudes Toward the Adoption of Generative AI by Taiwanese Civil Servantsen_US
dc.type (資料類型) thesisen_US
dc.relation.reference (參考文獻) Ahn, M. J., and Chen, Y. C. (2022). Digital transformation toward AI-augmented public administration: The perception of government employees and the willingness to use AI in government. Government Information Quarterly, 39(2), 101664. https://doi.org/10.1016/j.giq.2021.101664. Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179-211. https://doi.org/10.1016/0749-5978(91)90020-T. Alikhani, M., Harris, B., and Patnaik, S. (2025). How are Americans using AI? Evidence from a nationwide survey. Brookings Institution. November 25. https://www.brookings.edu/articles/how-are-americans-using-ai-evidence-from-a-nationwide-survey/. Alon-Barkat, S., and Busuioc, M. (2023). Human–AI interactions in public sector decision making: “Automation bias” and “selective adherence” to algorithmic advice. Journal of Public Administration Research and Theory, 33(1), 153-169. https://doi.org/10.1093/jopart/muac007. Aoki, N. (2020). An experimental study of public trust in AI chatbots in the public sector. Government Information Quarterly, 37(4), 101490. https://doi.org/10.1016/j.giq.2020.101490. Ardoin, P. J., and Hicks, W. D. (2024). Fear and loathing: ChatGPT in the political science classroom. PS: Political Science & Politics, 57(4): 583-594. https://doi.org/10.1017/S1049096524000131. Balakrishnan, J., Abed, S. S., and Jones, P. (2022). The role of meta-UTAUT factors, perceived anthropomorphism, perceived intelligence, and social self-efficacy in chatbot-based services? Technological Forecasting and Social Change, 180, 121692. https://doi.org/10.1016/j.techfore.2022.121692. Benk, M., Kerstan, S., von Wangenheim, F., and Ferrario, A. (2025). Twenty-four years of empirical research on trust in AI: a bibliometric review of trends, overlooked issues, and future directions. AI & SOCIETY, 40, 2083-2106. https://doi.org/10.1007/s00146-024-02059-y. Berryhill, J., Heang, K. K., Clogher, R., and McBride, K. 
(2019). Hello, world: Artificial intelligence and its use in the public sector. OECD Working Papers on Public Governance. https://doi.org/10.1787/19934351. Borins, S. (2002). Leadership and innovation in the public sector. Leadership & Organization Development Journal, 23(8), 467-476. https://doi.org/10.1108/01437730210449357. Braun, V., and Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77-101. http://doi.org/10.1191/1478088706qp063oa. Braun, V., and Clarke, V. (2012). Thematic analysis. In APA Handbook of research methods in psychology Vol. 2: Research designs, edited by H. Cooper, pp. 57–71. Washington, DC: APA Books. Bright, J., Enock, F., Esnaashari, S., Francis, J., Hashem, Y., and Morgan, D. (2025). Generative AI is already widespread in the public sector: Evidence from a survey of UK public sector professionals. Digital Government: Research and Practice, 6(1), 1-13. https://doi.org/10.1145/3700140. Cai, L., Yuen, K. F., and Wang, X. (2023). Explore public acceptance of autonomous buses: An integrated model of UTAUT, TTF and trust. Travel Behaviour and Society, 31, 120-130. https://doi.org/10.1016/j.tbs.2022.11.010. Carmichael, M. (2025). People don’t trust AI tools, but use them anyway. Ipsos. September 16. https://www.ipsos.com/en-us/people-dont-trust-ai-tools-use-them-anyway. Center for Data Ethics and Innovation (CDEI) (2024). Public attitudes to data and AI: Tracker survey (Wave 3). Center for Data Ethics and Innovation. https://www.gov.uk/government/publications/public-attitudes-to-data-and-ai-tracker-survey-wave-3. Chen, D.-Y., Chu, P.-Y., Hsiao, N.-Y., Huang, T.-Y., Liao, Z.-P., and Tseng, H.-L. (2022). Government Digital Transformation: A Must-Read Introduction. Taipei, Taiwan: Wunan Books. Chen, D.-Y., Chang, P.-H., Liao, Z.-P., Huang, H., and Wang, C.-H. (2024). Digital Democratic Governance: A Public Reflection on Sovereignty, Innovation, and Social Development. Taipei, Taiwan: Wunan Books. 
Chen, G., Fan, J., and Azam, M. (2024). Exploring artificial intelligence (AI) chatbots adoption among research scholars using unified theory of acceptance and use of technology (UTAUT). Journal of Librarianship and Information Science, 57(4), 1205-1223. https://doi.org/10.1177/09610006241269189. Chen, T., Guo, W., Gao, X., and Liang, Z. (2021). AI-based self-service technology in public service delivery: User experience and influencing factors. Government Information Quarterly, 38(4), 101520. https://doi.org/10.1016/j.giq.2020.101520. Chen, T., Gascó-Hernandez, M., and Esteve, M. (2024). The adoption and implementation of artificial intelligence chatbots in public organizations: Evidence from US state governments. The American Review of Public Administration, 54(3), 255-270. https://doi.org/10.1177/02750740231200522. Cho, S., Hur, J.-Y., and Kim, D. (2025). Bridging trust in AI and its adoption: The role of organizational support in AI chatbot implementation in Korean government agencies. Government Information Quarterly, 42(4), 102081. https://doi.org/10.1016/j.giq.2025.102081. Choudhury, A., and Shamszare, H. (2023). Investigating the impact of user trust on the adoption and use of ChatGPT: Survey analysis. Journal of Medical Internet Research, 25, e47184. https://doi.org/10.2196/47184. Choung, H., David, P., and Ross, A. (2023). Trust in AI and its role in the acceptance of AI technologies. International Journal of Human–Computer Interaction, 39(9), 1727-1739. https://doi.org/10.1080/10447318.2022.2050543. Chung, L.-H., and Chung, J. (2023). Executive Yuan limits AI use in agencies. Taipei Times. September 1. https://www.taipeitimes.com/News/taiwan/archives/2023/09/01/2003805597. Coleman, J. (1990). The foundations of social theory. Cambridge, MA: Harvard University Press. Compeau, D. R., and Higgins, C. A. (1995a). Computer self-efficacy: Development of a measure and initial test. MIS Quarterly, 19(2), 189-211. https://doi.org/10.2307/249688. Compeau, D. 
R., and Higgins, C. A. (1995b). Application of social cognitive theory to training for computer skills. Information Systems Research, 6(2), 118-143. https://doi.org/10.1287/isre.6.2.118. Council of the European Union (2023). ChatGPT in the public sector – overhyped or overlooked? Council of the European Union. April 24. https://www.consilium.europa.eu/media/63818/art-paper-chatgpt-in-the-public-sector-overhyped-or-overlooked-24-april-2023_ext.pdf. Creswell, J. W. (2015). A concise introduction to mixed methods research. Thousand Oaks, CA: Sage Publications. Creswell, J. W., and Creswell, J. D. (2022). Research design: Qualitative, quantitative, and mixed methods approaches. Sixth Edition. Thousand Oaks, CA: Sage Publications. Creswell, J. W., and Plano Clark, V. L. (2011). Designing and conducting mixed methods research. Second Edition. Thousand Oaks, CA: Sage Publications. Daly, S. J., Wiewiora, A., and Hearn, G. (2025). Shifting attitudes and trust in AI: Influences on organizational AI adoption. Technological Forecasting and Social Change, 215, 124108. https://doi.org/10.1016/j.techfore.2025.124108. Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319-340. https://doi.org/10.2307/249008. Davis, F. D., Bagozzi, R. P., and Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), 982-1003. https://doi.org/10.1287/mnsc.35.8.982. Davis, F. D., Bagozzi, R. P., and Warshaw, P. R. (1992). Extrinsic and intrinsic motivation to use computers in the workplace. Journal of Applied Social Psychology, 22(14), 1111-1132. https://doi.org/10.1111/j.1559-1816.1992.tb00945.x. Desouza, K. (2018). Delivering artificial intelligence in government: Challenges and opportunities. IBM Center for The Business of Government. 
https://www.businessofgovernment.org/report/delivering-artificial-intelligence-government-challenges-and-opportunities. Dunleavy, P., Margetts, H., Bastow, S., and Tinkler, J. (2006). New public management is dead—long live digital-era governance. Journal of Public Administration Research and Theory, 16(3), 467-494. https://doi.org/10.1093/jopart/mui057. Dwivedi, Y. K., Rana, N. P., Jeyaraj, A., Clement, M., and Williams, M. D. (2019). Re-examining the unified theory of acceptance and use of technology (UTAUT): Towards a revised theoretical model. Information Systems Frontiers, 21, 719-734. https://doi.org/10.1007/s10796-017-9774-y. Eagly, A. H., and Chaiken, S. (1993). The psychology of attitudes. Fort Worth, TX: Harcourt Brace Jovanovich College Publishers. Eggers, W. D., Schatsky, D., and Viechnicki, P. (2017). AI-augmented government. Using cognitive technologies to redesign public sector work. Deloitte. https://www2.deloitte.com/us/en/insights/focus/cognitive-technologies/artificial-intelligence-government.html. Ernst & Young (2024). EY Pulse Survey: insights into the integration of AI in government. Ernst & Young. August. https://www.ey.com/en_us/industries/government-public-sector/insights-into-the-integration-of-ai-in-government. Everington, K. (2024). Nvidia CEO says ‘T-AI-WAN’ will help world build AI infrastructure. Taiwan News. June 7. https://taiwannews.com.tw/news/5885580. Everington, K. (2025). Taiwan passes AI Basic Act. Taiwan News. December 23. https://www.taiwannews.com.tw/news/6270744. Executive Yuan (2023). Guidelines for the Executive Yuan and affiliated agencies on using generative artificial intelligence (行政院及所屬機關(構)使用生成式AI參考指引). Executive Yuan. August 31. https://www.ey.gov.tw/File/99391880264BBB73. Executive Yuan (2025a). Cultivating public sector AI talent for smart government. June 13. https://english.ey.gov.tw/News3/9E5540D592A5FECD/7afed842-0d86-4d59-9144-0269b120ece6. Executive Yuan (2025b). 
The Executive Yuan convened a meeting of the Smart Nation Promotion Task Force; Premier Cho expressed hope that a new ‘Ten Major AI Infrastructure Projects’ initiative will allow Taiwan to reinvent itself and achieve a comprehensive transformation (行政院召開智慧國家推動小組會議 卓揆盼AI新十大建設 讓臺灣脫胎換骨、全面轉型). July 3. https://www.ey.gov.tw/Page/9277F759E41CCD91/c18c1f19-47ec-4907-932c-4d50d96df8e7. Executive Yuan (2025c). Executive Yuan approves draft bill for basic law on AI. August 28. https://english.ey.gov.tw/Page/61BF20C3E89B856/89da216e-5741-43e4-aac8-af4551a21499. Executive Yuan (2025d). A proposed basic law for AI. September 12. https://english.ey.gov.tw/News3/9E5540D592A5FECD/dfa7107b-f323-4961-a31b-873bbc12ea47. Fishbein, M, and Ajzen, I. (1975). Belief, attitude, intention and behavior: An introduction to theory and research. Reading, MA: Addison-Wesley. Fornell, C., and Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39-50. https://doi.org/10.1177/002224378101800104. Frisch‐Aviram, N., Spanghero Lotta, G., and Jordão de Carvalho, L. (2024). “Chat‐up”: The role of competition in street‐level bureaucrats’ willingness to break technological rules and use generative pre‐trained transformers (GPTs). Public Administration Review. Early View. https://doi.org/10.1111/puar.13824. Gansser, O. A., and Reich, C. S. (2021). A new acceptance model for artificial intelligence with extensions to UTAUT2: An empirical study in three segments of application. Technology in Society, 65, 101535. https://doi.org/10.1016/j.techsoc.2021.101535. Gerlich, M. (2023). Perceptions and acceptance of artificial intelligence: A multi-dimensional study. Social Sciences, 12(9), 502. https://doi.org/10.3390/socsci12090502. Gesk, T. S., and Leyer, M. (2022). Artificial intelligence in public services: When and why citizens accept its usage. Government Information Quarterly, 39(3), 101704. 
https://doi.org/10.1016/j.giq.2022.101704. Giesecke, O. (2024). AI Adoption in the Public Sector: Outcomes from a Nationwide Survey. Oliver Giesecke’s Personal Website. https://papers.olivergiesecke.com/AISurveySlides.pdf. Gillespie, N., Lockey, S., Curtis, C., Pool, J., and Akbari, A. (2023). Trust in artificial intelligence: A global study. The University of Queensland and KPMG Australia. https://assets.kpmg.com/content/dam/kpmg/au/pdf/2023/trust-in-ai-global-insights-2023.pdf. Gillespie, N., Lockey, S., Ward, T., Macdade, A., and Hassed, G. (2025). Trust, attitudes and use of artificial intelligence: A global study 2025. The University of Melbourne and KPMG. https://doi.org/10.26188/28822919. https://kpmg.com/xx/en/our-insights/ai-and-technology/trust-attitudes-and-use-of-ai.html. Glikson, E., and Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627-660. https://doi.org/10.5465/annals.2018.0057. Global Views Magazine (2023). Global Views investigates: The 2023 survey on the impact of generative AI on the future of the workplace. Global Views Magazine. June. https://gvsrc.cwgv.com.tw/articles/index/14901/1. Goodhue, D. L., and Thompson, R. L. (1995). Task-technology fit and individual performance. MIS Quarterly, 19(2), 213-236. Guenduez, A. A., and Mettler, T. (2023). Strategically constructed narratives on artificial intelligence: What stories are told in governmental artificial intelligence policies? Government Information Quarterly, 40(1), 101719. https://doi.org/10.1016/j.giq.2022.101719. Gursoy, D., Chi, O. H., Lu, L., and Nunkoo, R. (2019). Consumers acceptance of artificially intelligent (AI) device use in service delivery. International Journal of Information Management, 49, 157-169. https://doi.org/10.1016/j.ijinfomgt.2019.03.008. Ho, S. S., and Cheung, J. C. (2024). 
Trust in artificial intelligence, trust in engineers, and news media: Factors shaping public perceptions of autonomous drones through UTAUT2. Technology in Society, 77, 102533. https://doi.org/10.1016/j.techsoc.2024.102533. Hoff, K. A., and Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3), 407-434. Hooda, A., Gupta, P., Jeyaraj, A., Giannakis, M., and Dwivedi, Y. K. (2022). The effects of trust on behavioral intention and use behavior within e-government contexts. International Journal of Information Management, 67, 102553. https://doi.org/10.1016/j.ijinfomgt.2022.102553. Hu, L. T., and Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1-55. https://doi.org/10.1080/10705519909540118. Huang, H., and Chen, D.-Y. (2023). Public administration research on human and AI collaboration: A multilevel reflection on public sector organizational issues. Taiwanese Journal of Political Science, 96, 139-178. (In Chinese). https://doi.org/10.6166/TJPS.202306_(96).0004. Huang, H., Kim, K. C., Young, M. M., and Bullock, J. B. (2022). A matter of perspective: Differential evaluations of artificial intelligence between managers and staff in an experimental simulation. Asia Pacific Journal of Public Administration, 44(1), 47-65. https://doi.org/10.1080/23276665.2021.1945468. Huang, H., Tseng, K.-C., Liao, C.-P., and Chen, D.-Y. (2021). When AI joins the government: A reflection on AI application and public administration theory. Journal of Civil Service, 13(2), 91-114. (In Chinese). https://www.airitilibrary.com/Article/Detail?DocID=P20210507001-202111-202112300016-202112300016-91-114. Hujran, O., Al-Debei, M. M., Al-Adwan, A. S., Alarabiat, A., and Altarawneh, N. (2023). Examining the antecedents and outcomes of smart government usage: An integrated model. 
Government Information Quarterly, 40(1), 101783. https://doi.org/10.1016/j.giq.2022.101783. Ipsos (2023). Global views on A.I. Ipsos. July 2023. https://www.ipsos.com/sites/default/files/ct/news/documents/2023-07/Ipsos%20Global%20AI%202023%20Report-WEB_0.pdf. Ivankova, N. V., Creswell, J. W., and Stick, S. L. (2006). Using mixed-methods sequential explanatory design: From theory to practice. Field Methods, 18(1), 3-20. https://doi.org/10.1177/1525822X05282260. Jeffares, S. (2020). The virtual public servant: Artificial intelligence and frontline work. Cham, Switzerland: Springer Nature. Kao, J.-H. (2024). How to turn Taiwan into an “AI Island”? Lai Ching-te lists 3 steps: All civil servants must receive training. (如何讓台灣成為「AI智慧島」?賴清德今說明3步驟:公務人員都要受訓). FTV News. July 30. https://www.ftvnews.com.tw/news/detail/2024730W0241. Kasilingam, D. L. (2020). Understanding the attitude and intention to use smartphone chatbots for shopping. Technology in Society, 62, 101280. https://doi.org/10.1016/j.techsoc.2020.101280. Kelly, S., Kaye, S. A., and Oviedo-Trespalacios, O. (2023). What factors contribute to the acceptance of artificial intelligence? A systematic review. Telematics and Informatics, 77, 101925. https://doi.org/10.1016/j.tele.2022.101925. Kennedy, B., Yam, E., Kikuchi, E., Pula, I., and Fuentes, J. (2025). How Americans view AI and its impact on people and society. Pew Research Center. September 17. https://www.pewresearch.org/science/2025/09/17/how-americans-view-ai-and-its-impact-on-people-and-society/. Khechine, H., Lakhal, S., and Ndjambou, P. (2016). A meta‐analysis of the UTAUT model: Eleven years later. Canadian Journal of Administrative Sciences/Revue canadienne des sciences de l’administration, 33(2), 138-152. https://doi.org/10.1002/cjas.1381. Kim, S. (2009). Revising Perry’s measurement scale of public service motivation. The American Review of Public Administration, 39(2), 149-163. https://doi.org/10.1177/0275074008317681. Kim, S. (2011). 
Testing a revised measure of public service motivation: Reflective versus formative specification. Journal of Public Administration Research and Theory, 21(3), 521-546. https://doi.org/10.1093/jopart/muq048.
Kleizen, B., Van Dooren, W., Verhoest, K., and Tan, E. (2023). Do citizens trust trustworthy artificial intelligence? Experimental evidence on the limits of ethical AI measures in government. Government Information Quarterly, 40(4), 101834. https://doi.org/10.1016/j.giq.2023.101834.
Kline, R. B. (2016). Principles and practice of structural equation modeling. 4th Edition. New York: Guilford Publications.
Lee, J. D., and See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50-80. https://doi.org/10.1518/hfes.46.1.50_30392.
Lee, S., Jones-Jang, S. M., Chung, M., Kim, N., and Choi, J. (2024). Who is using ChatGPT and why? Extending the unified theory of acceptance and use of technology (UTAUT) model. Information Research: An International Electronic Journal, 29(1), 54-72. https://doi.org/10.47989/ir291647.
Lee, T.-P., Chang, C.-Y., and Lee, C.-L. (2022). Unintentional discrimination in application of artificial intelligence to public policies: A systematic article review. Journal of Public Administration, 63, 1-49. (In Chinese). https://doi.org/10.30409/JPA.202209_(63).0001.
Lin, L.-S. (2024). Major transformation of 210,000 civil servants: Streamlining work and upgrading services. (21萬公務員大改造 工作精簡、服務升級). Commonwealth Magazine (天下雜誌). November 13. https://www.cw.com.tw/article/5132644.
Lin, L. (2025). About 1 in 5 U.S. workers now use AI in their job, up since last year. Pew Research Center. October 6. https://www.pewresearch.org/short-reads/2025/10/06/about-1-in-5-us-workers-now-use-ai-in-their-job-up-since-last-year/.
Ma, X., and Huo, Y. (2023). Are users willing to embrace ChatGPT? Exploring the factors on the acceptance of chatbots from the perspective of AIDUA framework. Technology in Society, 75, 102362.
https://doi.org/10.1016/j.techsoc.2023.102362.
Maciejewski, M. (2017). To do more, better, faster and more cheaply: Using big data in public administration. International Review of Administrative Sciences, 83(1_suppl), 120-135. https://doi.org/10.1177/0020852316640058.
Madan, R., and Ashok, M. (2023). AI adoption and diffusion in public administration: A systematic literature review and future research agenda. Government Information Quarterly, 40(1), 101774. https://doi.org/10.1016/j.giq.2022.101774.
Martins, C., Oliveira, T., and Popovič, A. (2014). Understanding the Internet banking adoption: A unified theory of acceptance and use of technology and perceived risk application. International Journal of Information Management, 34(1), 1-13. https://doi.org/10.1016/j.ijinfomgt.2013.06.002.
Mayer, R. C., Davis, J. H., and Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709-734. https://doi.org/10.2307/258792.
McGrath, C., Farazouli, A., and Cerratto-Pargman, T. (2025). Generative AI chatbots in higher education: A review of an emerging research area. Higher Education, 89, 1533-1549. https://doi.org/10.1007/s10734-024-01288-w.
McKnight, D. H., Carter, M., Thatcher, J. B., and Clay, P. F. (2011). Trust in a specific technology: An investigation of its components and measures. ACM Transactions on Management Information Systems, 2(2), 1-25. https://doi.org/10.1145/1985347.1985353.
Medaglia, R., Gil-Garcia, J. R., and Pardo, T. A. (2023). Artificial intelligence in government: Taking stock and moving forward. Social Science Computer Review, 41(1), 123-140. https://doi.org/10.1177/08944393211034087.
Mehr, H. (2017). Artificial intelligence for citizen services and government. Harvard Ash Center Democratic Governance and Innovation. Harvard Kennedy School. https://ash.harvard.edu/wp-content/uploads/2024/02/artificial_intelligence_for_citizen_services.pdf.
Mergel, I., Dickinson, H., Stenvall, J., and Gasco, M. (2024).
Implementing AI in the public sector. Public Management Review, 1-14. https://doi.org/10.1080/14719037.2023.2231950.
Mergel, I., Ganapati, S., and Whitford, A. B. (2021). Agile: A new way of governing. Public Administration Review, 81(1), 161-165. https://doi.org/10.1111/puar.13202.
Mikalef, P., Lemmer, K., Schaefer, C., Ylinen, M., Fjørtoft, S. O., Torvatn, H. Y., Gupta, M., and Niehaves, B. (2023). Examining how AI capabilities can foster organizational performance in public organizations. Government Information Quarterly, 40(2), 101797. https://doi.org/10.1016/j.giq.2022.101797.
Milakovich, M. E. (2022). Digital governance: Applying advanced technologies to improve public service. Second Edition. New York: Routledge.
Ministry of Digital Affairs (MODA) (2024). Digital government program 2.0: Ministry of Digital Affairs collaborates with government agencies to drive digital innovation in public governance. Ministry of Digital Affairs. July 15. https://moda.gov.tw/en/press/press-releases/13098.
Misra, S., Katz, B., Roberts, P., Carney, M., and Valdivia, I. (2024). Toward a person-environment fit framework for artificial intelligence implementation in the public sector. Government Information Quarterly, 41(3), 101962. https://doi.org/10.1016/j.giq.2024.101962.
Moore, G. C., and Benbasat, I. (1991). Development of an instrument to measure the perceptions of adopting an information technology innovation. Information Systems Research, 2(3), 192-222. https://doi.org/10.1287/isre.2.3.192.
Morse, J. M. (1991). Approaches to qualitative-quantitative methodological triangulation. Nursing Research, 40(2), 120-123.
National Development Council (2020). Digital Government Program 2.0 of Taiwan (2021-2025). National Development Council. July. https://moda.gov.tw/digital-affairs/digital-service/operations/120.
National Development Council (2025). Ten Major AI Infrastructure Projects Initiative: Building a Smarter New Way of Life for Everyone (AI新十大建設,打造全民智慧新生活). September 25.
https://www.ndc.gov.tw/nc_14813_39387.
Neudert, L. M., Knuutila, A., and Howard, P. N. (2020). Global attitudes towards AI, machine learning & automated decision making. Oxford Commission on AI and Good Governance. https://julietwaters.com/wp-content/uploads/2024/01/globalattitudestowardsaimachinelearning2020-1.pdf.
Nunnally, J. C., and Bernstein, I. H. (1994). Psychometric Theory. New York: McGraw-Hill.
OECD (2019). Artificial intelligence in society. Paris, France: OECD Publishing. https://www.oecd.org/en/publications/2019/06/artificial-intelligence-in-society_c0054fa1.html.
O’Shaughnessy, M. R., Schiff, D. S., Varshney, L. R., Rozell, C. J., and Davenport, M. A. (2023). What governs attitudes toward artificial intelligence adoption and governance? Science and Public Policy, 50(2), 161-176. https://doi.org/10.1093/scipol/scac056.
Partnership for Public Service (2019). More than meets AI: Assessing the impact of artificial intelligence on the work of government. IBM Center for the Business of Government. https://www.businessofgovernment.org/sites/default/files/More%20Than%20Meets%20AI.pdf.
Perry, J. L. (1996). Measuring public service motivation: An assessment of construct reliability and validity. Journal of Public Administration Research and Theory, 6(1), 5-22. https://doi.org/10.1093/oxfordjournals.jpart.a024303.
Petty, R. E., and Cacioppo, J. T. (1996). Attitudes and persuasion: Classic and contemporary approaches. Boulder, CO: Westview Press.
Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., and Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879-903.
Reuters (2025). Taiwan plans AI projects to boost economy by $510 billion. July 23. https://www.reuters.com/world/asia-pacific/taiwan-plans-ai-projects-boost-economy-by-510-billion-2025-07-23/.
Rizk, A., and Lindgren, I. (2025).
Automated decision-making in public administration: Changing the decision space between public officials and citizens. Government Information Quarterly, 42(3), 102061. https://doi.org/10.1016/j.giq.2025.102061.
Rogers, E. M. (1995). Diffusion of innovations (4th Edition). New York: The Free Press.
Schepman, A., and Rodway, P. (2020). Initial validation of the general attitudes towards Artificial Intelligence Scale. Computers in Human Behavior Reports, 1, 100014. https://doi.org/10.1016/j.chbr.2020.100014.
Schiff, D. S., Schiff, K. J., and Pierson, P. (2022). Assessing public value failure in government adoption of artificial intelligence. Public Administration, 100(3), 653-673. https://doi.org/10.1111/padm.12742.
Senadheera, S., Yigitcanlar, T., Desouza, K. C., Mossberger, K., Corchado, J., Mehmood, R., Li, R. Y. M., and Cheong, P. H. (2025). Understanding chatbot adoption in local governments: A review and framework. Journal of Urban Technology, 32(3), 35-69. https://doi.org/10.1080/10630732.2023.2297665.
Sheeran, P. (2002). Intention—behavior relations: A conceptual and empirical review. European Review of Social Psychology, 12(1), 1-36. https://doi.org/10.1080/14792772143000003.
Silic, M., and Back, A. (2014). Shadow IT–A view from behind the curtain. Computers & Security, 45, 274-283. https://doi.org/10.1016/j.cose.2014.06.007.
Sohn, K., and Kwon, O. (2020). Technology acceptance theories and factors influencing artificial intelligence-based intelligent products. Telematics and Informatics, 47, 101324. https://doi.org/10.1016/j.tele.2019.101324.
Söllner, M., Hoffmann, A., and Leimeister, J. M. (2016). Why different trust relationships matter for information systems users. European Journal of Information Systems, 25(3), 274-287. https://doi.org/10.1057/ejis.2015.17.
Straub, V. J., Hashem, Y., Bright, J., Bhagwanani, S., Morgan, D., Francis, J., Esnaashari, S., and Margetts, H. (2024).
AI for bureaucratic productivity: Measuring the potential of AI to help automate 143 million UK government transactions. arXiv preprint. https://arxiv.org/abs/2403.14712.
Taiwan Government Bureaucrats Survey (2023). Taiwan government bureaucrats survey 2023. National Chengchi University.
Tashakkori, A., and Teddlie, C. (1998). Mixed methodology: Combining qualitative and quantitative approaches. Thousand Oaks, CA: Sage Publications.
Taylor, S., and Todd, P. (1995). Assessing IT usage: The role of prior experience. MIS Quarterly, 19(4), 561-570. https://doi.org/10.2307/249633.
Thiebes, S., Lins, S., and Sunyaev, A. (2021). Trustworthy artificial intelligence. Electronic Markets, 31, 447-464. https://doi.org/10.1007/s12525-020-00441-4.
Thompson, R. L., Higgins, C. A., and Howell, J. M. (1991). Personal computing: Toward a conceptual model of utilization. MIS Quarterly, 15(1), 125-143. https://doi.org/10.2307/249443.
Ting, Y.-J., and Lin, T.-Z. (2020). The exploration of the potential on artificial intelligence applications to improve public governance. Archives Semiannual, 19(2), 24-41. (In Chinese). https://www.airitilibrary.com/Article/Detail/P20190408001-202012-202101140011-202101140011-24-41.
Ting, Y.-J., Huang, H., Lin, T.-L., and Chang, W.-H. (2023). Expanding governance capabilities: The experience of AI implementation in Taiwan. East Asian Policy, 15(2), 44-62. https://doi.org/10.1142/S1793930523000119.
Tseng, K.-C., Chen, D.-Y., and Hu, L.-T. (2009). Promoting citizen-centered E-government initiatives: Vision or illusion? Journal of Public Administration, 33, 1-43. (In Chinese). https://doi.org/10.30409/JPA.200912_(33).0001.
Tsfati, Y., and Cappella, J. N. (2003). Do people watch what they do not trust? Exploring the association between news media skepticism and exposure. Communication Research, 30(5), 504-529. https://doi.org/10.1177/0093650203253371.
United States Government Accountability Office (U.S. GAO) (2025).
Artificial Intelligence: Generative AI Use and Management at Federal Agencies. United States Government Accountability Office. July. Report GAO-25-107653. https://www.gao.gov/products/gao-25-107653.
Valle-Cruz, D., Gil-Garcia, J. R., and Sandoval-Almazan, R. (2024). Artificial intelligence algorithms and applications in the public sector: A systematic literature review based on the PRISMA approach. In Y. Charalabidis, R. Medaglia, and C. van Noordt (eds.), Research Handbook on Public Management and Artificial Intelligence, pp. 8-26. Cheltenham, UK: Edward Elgar Publishing.
Van Noordt, C., and Misuraca, G. (2022). Artificial intelligence for the public sector: Results of landscaping the use of AI in government across the European Union. Government Information Quarterly, 39(3), 101714. https://doi.org/10.1016/j.giq.2022.101714.
Venkatesh, V. (2022). Adoption and use of AI tools: A research agenda grounded in UTAUT. Annals of Operations Research, 308(1), 641-652. https://doi.org/10.1007/s10479-020-03918-9.
Venkatesh, V., Morris, M. G., Davis, G. B., and Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425-478. https://doi.org/10.2307/30036540.
Venkatesh, V., Thong, J. Y., and Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157-178. https://doi.org/10.2307/41410412.
Venkatesh, V., Thong, J. Y., Chan, F. K., and Hu, P. J. (2016). Managing citizens’ uncertainty in e-government services: The mediating and moderating roles of transparency and trust. Information Systems Research, 27(1), 87-111. https://doi.org/10.1287/isre.2015.0612.
Viechnicki, P., and Eggers, W. D. (2017). How much time and money can AI save government? Cognitive technologies could free up hundreds of millions of public sector worker hours. Deloitte University Press.
https://www2.deloitte.com/content/dam/insights/us/articles/3834_How-much-time-and-money-can-AI-save-government/DUP_How-much-time-and-money-can-AI-save-government.pdf.
Vogl, T. M., Seidelin, C., Ganesh, B., and Bright, J. (2020). Smart technology and the emergence of algorithmic bureaucracy: Artificial intelligence in UK local authorities. Public Administration Review, 80(6), 946-961. https://doi.org/10.1111/puar.13286.
Wang, C.-W., Hsu, B.-Y., and Chen, D.-Y. (2024). Chatbot applications in government frontline services: Leveraging artificial intelligence and data governance to reduce problems and increase effectiveness. Asia Pacific Journal of Public Administration, 46(4), 488-511.
Wang, Y.-F., Chen, Y.-C., and Chien, S.-Y. (2025). Citizens’ intention to follow recommendations from a government-supported AI-enabled system. Public Policy and Administration, 40(2), 372-400. https://doi.org/10.1177/09520767231176126.
Wang, Y. Y., and Wang, Y. S. (2022). Development and validation of an artificial intelligence anxiety scale: An initial application in predicting motivated learning behavior. Interactive Learning Environments, 30(4), 619-634. https://doi.org/10.1080/10494820.2019.1674887.
Whitford, A. B., and Lee, S. Y. (2015). Exit, voice, and loyalty with multiple exit options: Evidence from the US federal workforce. Journal of Public Administration Research and Theory, 25(2), 373-398. https://doi.org/10.1093/jopart/muu004.
Wing, J. M. (2021). Trustworthy AI. Communications of the ACM, 64(10), 64-71. https://doi.org/10.1145/3448248.
Wirtz, B. W., and Müller, W. M. (2019). An integrated artificial intelligence framework for public management. Public Management Review, 21(7), 1076-1100. https://doi.org/10.1080/14719037.2018.1549268.
Wirtz, B. W., Weyerer, J. C., and Geyer, C. (2019). Artificial intelligence and the public sector—applications and challenges. International Journal of Public Administration, 42(7), 596-615. https://doi.org/10.1080/01900692.2018.1498103.
Wirtz, B. W., Weyerer, J. C., and Sturm, B. J. (2020). The dark sides of artificial intelligence: An integrated AI governance framework for public administration. International Journal of Public Administration, 43(9), 818-829. https://doi.org/10.1080/01900692.2020.1749851.
Wu, N., and Wu, P. Y. (2024). Surveying the impact of generative artificial intelligence on political science education. PS: Political Science & Politics, 57(4), 602-609. https://doi.org/10.1017/S1049096524000167.
Yoo, S. J., Han, S. H., and Huang, W. (2012). The roles of intrinsic motivators and extrinsic motivators in promoting e-learning in the workplace: A case from South Korea. Computers in Human Behavior, 28(3), 942-950. https://doi.org/10.1016/j.chb.2011.12.015.
Young, M. M., Bullock, J. B., and Lecy, J. D. (2019). Artificial discretion as a tool of governance: A framework for understanding the impact of artificial intelligence on public administration. Perspectives on Public Management and Governance, 2(4), 301-313. https://doi.org/10.1093/ppmgov/gvz014.
Young, M. M., Himmelreich, J., Bullock, J. B., and Kim, K. C. (2019). Artificial intelligence and administrative evil. Perspectives on Public Management and Governance, 4(3), 244-258. https://doi.org/10.1093/ppmgov/gvab006.
Zhang, B., and Dafoe, A. (2019). Artificial intelligence: American attitudes and trends. Oxford: Center for the Governance of AI, Future of Humanity Institute, University of Oxford. https://governanceai.github.io/US-Public-Opinion-Report-Jan-2019/us_public_opinion_report_jan_2019.pdf.
Zhang, W., Zuo, N., He, W., Li, S., and Yu, L. (2021). Factors influencing the use of artificial intelligence in government: Evidence from China. Technology in Society, 66, 101675. https://doi.org/10.1016/j.techsoc.2021.101675.
Zuiderwijk, A., Chen, Y.-C., and Salem, F. (2021). Implications of the use of artificial intelligence in public governance: A systematic literature review and a research agenda.
Government Information Quarterly, 38(3), 101577. https://doi.org/10.1016/j.giq.2021.101577.