NCCU Library: Theses
Title: 人機互動中的陪伴:LLM 聊天機器人在心理支持上的歷程分析
Title (English): Companionship in Human–Computer Interaction: A Process Analysis of Psychological Support by LLM Chatbots
Author: 陳韋蓉 (Chen Wei Rong)
Advisors: 陳宜秀 (YiSiu Chen); 廖峻鋒 (Chun-Feng Liao)
Keywords:
大型語言模型
心理健康支持
人工智慧
支持性溝通理論
信任自動化理論
聊天機器人
情感支持
使用者體驗
回應風格
Large Language Model
LLM
Mental Health Support
Artificial Intelligence
AI
Supportive Communication Theory
Trust in Automation
Chatbot
User Experience
Response Style

Date: 2025
Uploaded: 1-Sep-2025 16:50:22 (UTC+8)

Abstract:
This study examines whether chatbots built on large language models (LLMs) can provide emotional support and psychological companionship comparable to the supportive effects of counseling. With the rapid development of generative artificial intelligence (AI), chatbots with natural language capabilities are increasingly applied in the mental health field, yet whether they can genuinely offer supportive interactions marked by feeling understood, emotional comfort, and trust still requires careful validation. Grounded in Supportive Communication Theory, this study uses prompt engineering to have a chatbot simulate three response styles (emotional support, appraisal support, and informational support) and asks whether these styles can effectively convey empathy and care and thereby provide emotional support. A pilot study first invited professionally trained counselors to evaluate and refine the chatbot's supportive responses. The main study then used the diary study method: participants interacted with the pilot-validated chatbot for ten consecutive days, with questionnaires and interviews administered at different stages to collect data on user experience and interaction quality. The results were analyzed through the Trust in Automation model and empathy scales (ECSS and CARE), assessing trust formation, response fit, emotion perception, and affective connection, to explore whether a chatbot can, like an anthropomorphic helper, hold users' emotional experiences and psychological stress. Using grounded theory, the study derives a three-stage interaction process of trust building, pacing regulation, and perspective transformation, and proposes a generative AI psychological support interaction process model to explain how supportive relationships between humans and generative AI develop. The study also finds that perceived empathy arises from connections such as the system remembering conversation content and proactively recalling past experiences, which evoke a stronger sense of being understood than single emotion labels. Even so, most participants still felt that AI cannot replace a real person's deep empathy, integration of experience, and understanding of values, and some preferred more challenging and practical conversations, viewing these as part of support rather than seeking only companionship and comfort. The study provides a theoretical foundation and practical recommendations for AI counseling interfaces and identifies design dimensions for strengthening supportive dialogue in AI.

References

Chinese references (中文參考資料):
謝麗紅、陳亭妍、張瑋珊、陳雪均(2024)。導入探究與實作精神的人工智慧及其應用課程效果研究。教育心理學報,56(1),1–24。https://doi.org/10.6251/BEP.202409_56(1).0001
吳宗儒(2018)。諮商心理師運用同理之方式研究〔碩士論文,國立嘉義大學〕。臺灣博碩士論文知識加值系統。https://hdl.handle.net/11296/49g885
沈奕辰(2025)。結合GPT模型與VITS於心理諮商輔導之應用。淡江大學電機工程學系人工智慧機器人碩士班學位論文,1–93。https://doi.org/10.6846/tku202400764
社團法人臺灣憂鬱症防治協會(2025)。各年齡層憂鬱症的求助阻礙。https://www.depression.org.tw/communication/info.asp?/167.html
胡幼慧(2008)。質性研究—理論、方法及本土女性研究實例。https://www.books.com.tw/products/0010406897
財團法人台灣網路資訊中心(2024)。2024 台灣網路報告。https://report.twnic.tw/2024/index.html
財團法人「張老師」基金會(2024)。https://www.1980.org.tw/news_show.php?news_id=579
陳向明(2024)。社會科學質的研究。五南。https://www.wunan.com.tw/bookdetail?NO=3448
陳德倫(2023)。3次免費諮商,然後呢?推動「求助常態化」,擴大年輕世代心理健康支持網的新挑戰。報導者 The Reporter。https://www.twreporter.org/a/free-counceling-for-young-people-program
統計處(2021, October 26)。世界心理健康日衛生福利統計通報。https://dep.mohw.gov.tw/dos/cp-5112-63761-113.html
黃惠惠(2005)。助人歷程與技巧(新增訂版)。https://www.books.com.tw/products/0010308213
葉寶玲、郭文正、蔡佳容(2024)。諮商系所學生使用聊天機器人經驗初探。教育心理學報,56(1),45–72。https://doi.org/10.6251/BEP.202409_56(1).0003
廖本富(2000)。同理心與焦點解決短期諮商。https://tpl.ncl.edu.tw/NclService/JournalContentDetail?SysId=A00003770
衛福部(2020a)。心理師執行通訊心理諮商業務核准作業參考原則。https://www.twtcpa.org.tw/sites/default/files/field_files/news/%E5%BF%83%E7%90%86%E5%B8%AB%E5%9F%B7%E8%A1%8C%E9%80%9A%E8%A8%8A%E5%BF%83%E7%90%86%E8%AB%AE%E5%95%86%E6%A5%AD%E5%8B%99%E6%A0%B8%E5%87%86%E4%BD%9C%E6%A5%AD%E5%8F%83%E8%80%83%E5%8E%9F%E5%89%87%281090729%E4%BF%AE%E6%AD%A3%29.pdf
衛福部(2020b)。壓力指數測量表。健康九九+網站。https://health99.hpa.gov.tw/onlineQuiz/pressure

English references (英文參考資料):
Ackerman, S. J., & Hilsenroth, M. J. (2003). A review of therapist characteristics and techniques positively impacting the therapeutic alliance. Clinical Psychology Review, 23(1), 1–33. https://doi.org/10.1016/S0272-7358(02)00146-0
Altman, I., & Taylor, D. A. (1973). Social penetration: The development of interpersonal relationships (pp. viii, 212). Holt, Rinehart & Winston.
Anthropic. (2024). Introducing the Model Context Protocol. https://www.anthropic.com/news/model-context-protocol
Grace, A. (2024, October 9). Gen Zs, millennials are using AI for emotional support, calling it 'more effective' than a pet: Study. Yahoo Life. https://www.yahoo.com/lifestyle/gen-zs-millennials-using-ai-141641571.html
Barrett-Lennard, G. T. (1981). The empathy cycle: Refinement of a nuclear concept. Journal of Counseling Psychology, 28(2), 91–100. https://doi.org/10.1037/0022-0167.28.2.91
Inkster, B., Sarda, S., & Subramanian, V. (2018). An Empathy-Driven, Conversational Artificial Intelligence Agent (Wysa) for Digital Mental Well-Being: Real-World Data Evaluation Mixed-Methods Study. JMIR mHealth and uHealth. https://mhealth.jmir.org/2018/11/e12106/
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D., Wu, J., Winter, C., … Amodei, D. (2020). Language Models are Few-Shot Learners. Advances in Neural Information Processing Systems, 33, 1877–1901. https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html
Burleson, B. R. (2003). The experience and effects of emotional support: What the study of cultural and gender differences can tell us about close relationships, emotion, and interpersonal communication. Personal Relationships, 10(1), 1–23. https://doi.org/10.1111/1475-6811.00033
Bylund, C. L., & Makoul, G. (2002). Empathic communication and gender in the physician–patient encounter. Patient Education and Counseling, 48(3), 207–216. https://doi.org/10.1016/S0738-3991(02)00173-8
Reeves, B., & Nass, C. (1996). The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. https://www.researchgate.net/publication/37705092_The_Media_Equation_How_People_Treat_Computers_Television_and_New_Media_Like_Real_People_and_Pla
Fang, C. M., Liu, A. R., Danry, V., & Agarwal, S. (2025). How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use: A Longitudinal Randomized Controlled Study. https://www.researchgate.net/publication/390143219_How_AI_and_Human_Behaviors_Shape_Psychosocial_Effects_of_Chatbot_Use_A_Longitudinal_Randomized_Controlled_Study
Chang, Y.-H., Lin, C.-Y., Liao, S.-C., Chen, Y.-Y., Shaw, F. F.-T., Hsu, C.-Y., Gunnell, D., & Chang, S.-S. (2023). Societal factors and psychological distress indicators associated with the recent rise in youth suicide in Taiwan: A time trend analysis. The Australian and New Zealand Journal of Psychiatry, 57(4), 537–549. https://doi.org/10.1177/00048674221108640
Chatgptsmodel.com. (n.d.). ChatGPT AI girlfriend. Retrieved April 3, 2025, from https://chatgpt.com
Cohen, S., & Wills, T. A. (1985). Stress, social support, and the buffering hypothesis. Psychological Bulletin, 98(2), 310–357. https://doi.org/10.1037/0033-2909.98.2.310
Torrey, C., Fussell, S. R., & Kiesler, S. (2013). How a robot should give advice. IEEE. https://ieeexplore.ieee.org/document/6483599
Jurafsky, D., & Martin, J. H. (2019). Speech and Language Processing. https://web.stanford.edu/~jurafsky/slp3/
Bakker, D., Kazantzis, N., Rickwood, D., & Rickard, N. (2016). Mental Health Smartphone Apps: Review and Evidence-Based Recommendations for Future Developments. JMIR Mental Health. https://mental.jmir.org/2016/1/e7/
Derks, D., Fischer, A. H., & Bos, A. E. R. (2008). The role of emotion in computer-mediated communication: A review. Computers in Human Behavior, 24(3), 766–785. https://doi.org/10.1016/j.chb.2007.04.004
Cramer, D. (2001). Facilitativeness, conflict, demand for approval, self-esteem, and satisfaction with romantic relationships. https://psycnet.apa.org/record/2003-02058-010
Elliott, R., Bohart, A. C., Watson, J. C., & Greenberg, L. S. (2011). Empathy. Psychotherapy, 48(1), 43–49. https://doi.org/10.1037/a0022187
Elliott, R., Watson, J. C., Bohart, A. C., & Murphy, D. (2018). Therapist empathy and client outcome: An updated meta-analysis. https://nottingham-repository.worktribe.com/output/921082/therapist-empathy-and-client-outcome-an-updated-meta-analysis
Thomaz, F., Salge, C., Karahanna, E., & Hulland, J. (2020). Learning from the Dark Web: Leveraging conversational agents in the era of hyper-privacy to enhance marketing. Journal of the Academy of Marketing Science. https://link.springer.com/article/10.1007/s11747-019-00704-3
Freitas, J. D., Castelo, N., Uguralp, A., & Uguralp, Z. (2024). Lessons From an App Update at Replika AI: Identity Discontinuity in Human-AI Relationships (No. arXiv:2412.14190). arXiv. https://doi.org/10.48550/arXiv.2412.14190
Fu, T. S.-T., Lee, C.-S., Gunnell, D., Lee, W.-C., & Cheng, A. T.-A. (2013). Changing trends in the prevalence of common mental disorders in Taiwan: A 20-year repeated cross-sectional survey. The Lancet, 381(9862), 235–241. https://doi.org/10.1016/S0140-6736(12)61264-1
Ghandeharioun, A., McDuff, D., Czerwinski, M., & Rowan, K. (2019). Towards Understanding Emotional Intelligence for Behavior Change Chatbots. 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII), 8–14. https://doi.org/10.1109/ACII.2019.8925433
Glaser, B., & Strauss, A. (2017). Discovery of Grounded Theory: Strategies for Qualitative Research. Routledge. https://doi.org/10.4324/9780203793206
Goldsmith, D. J. (2004). Communicating social support (pp. x, 207). Cambridge University Press. https://doi.org/10.1017/CBO9780511606984
Packard, G., & Berger, J. (2020). How Concrete Language Shapes Customer Satisfaction. Journal of Consumer Research. https://academic.oup.com/jcr/article/47/5/787/5873524
Gross, J. J. (1998). Antecedent- and response-focused emotion regulation: Divergent consequences for experience, expression, and physiology. Journal of Personality and Social Psychology, 74(1), 224–237. https://doi.org/10.1037//0022-3514.74.1.224
Haque, M. D. R., & Rubya, S. (2023). An Overview of Chatbot-Based Mobile Mental Health Apps: Insights From App Description and User Reviews. JMIR mHealth and uHealth, 11(1), e44838. https://doi.org/10.2196/44838
Hatch, S. G., Goodman, Z. T., Vowels, L., Hatch, H. D., Brown, A. L., Guttman, S., Le, Y., Bailey, B., Bailey, R. J., Esplin, C. R., Harris, S. M., Jr, D. P. H., McLaughlin, M., O'Connell, P., Rothman, K., Ritchie, L., Jr, D. N. T., & Braithwaite, S. R. (2025). When ELIZA meets therapists: A Turing test for the heart and mind. PLOS Mental Health, 2(2), e0000145. https://doi.org/10.1371/journal.pmen.0000145
Herrando, C., & Constantinides, E. (n.d.). Emotional contagion: A brief overview and future directions. Frontiers in Psychology. Retrieved April 3, 2025, from https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2021.712606/full
Høiland, C. G., Følstad, A., & Karahasanovic, A. (2020). Hi, can I help? Exploring how to design a mental health chatbot for youths. Human Technology, 16(2), Article 2.
Holmstrom, A. J., Bodie, G. D., Burleson, B. R., McCullough, J. D., Rack, J. J., Hanasono, L. K., & Rosier, J. G. (2015). Testing a dual-process theory of supportive communication outcomes: How multiple factors influence outcomes in support situations. Communication Research, 42(4), 526–546. https://doi.org/10.1177/0093650213476293
House, J. S., Umberson, D., & Landis, K. R. (1988). Structures and Processes of Social Support. Annual Review of Sociology, 14, 293–318.
Chin, H., Song, H., Baek, G., Shin, M., Jung, C., Cha, M., & Choi, J. (2024). The Potential of Chatbots for Emotional Support and Promoting Mental Well-Being in Different Cultures: Mixed Methods Study. Journal of Medical Internet Research. https://www.jmir.org/2023/1/e51712
Pentina, I., Xie, T., Hancock, T., & Bailey, A. A. (2023). Consumer-machine relationships in the age of artificial intelligence: Systematic literature review and research directions. https://www.researchgate.net/publication/371229071_Consumer-machine_relationships_in_the_age_of_artificial_intelligence_Systematic_literature_review_and_research_directions
Fitzpatrick, K. K., Darcy, A., & Vierhile, M. (2017). Delivering Cognitive Behavior Therapy to Young Adults With Symptoms of Depression and Anxiety Using a Fully Automated Conversational Agent (Woebot): A Randomized Controlled Trial. JMIR Mental Health. https://mental.jmir.org/2017/2/e19/
Kaye, L. K., Malone, S. A., & Wall, H. J. (2017). Emojis: Insights, Affordances, and Possibilities for Psychological Science. Trends in Cognitive Sciences, 21(2), 66–68. https://doi.org/10.1016/j.tics.2016.10.007
Ng, K. (2025). "DeepSeek brought me to tears": How young Chinese find therapy in AI. BBC News. https://www.bbc.com/news/articles/cy7g45g2nxno
Knapp, M. L., & Daly, J. A. (2002). Handbook of Interpersonal Communication. SAGE.
Kramer, A. D. I., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111(24), 8788–8790. https://doi.org/10.1073/pnas.1320040111
Lai, T., Shi, Y., Du, Z., Wu, J., Fu, K., Dou, Y., & Wang, Z. (2024). Supporting the Demand on Mental Health Services with AI-Based Conversational Large Language Models (LLMs). BioMedInformatics, 4(1), Article 1. https://doi.org/10.3390/biomedinformatics4010002
Lan, A., Lee, A., Munroe, K., McRae, C., Kaleis, L., Keshavjee, K., & Guergachi, A. (2018). Review of cognitive behavioural therapy mobile apps using a reference architecture embedded in the patient-provider relationship. Biomedical Engineering Online, 17(1), 183. https://doi.org/10.1186/s12938-018-0611-4
Lee, J. D., & See, K. A. (2004). Trust in Automation: Designing for Appropriate Reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50_30392
Greenberg, L. S., Rice, L. N., & Elliott, R. (1996). Facilitating Emotional Change: The Moment-by-Moment Process. Guilford Press. https://www.guilford.com/books/Facilitating-Emotional-Change/Greenberg-Rice-Elliott/9781572302013
Li, C., Wang, J., Zhang, Y., Zhu, K., Hou, W., Lian, J., Luo, F., Yang, Q., & Xie, X. (2023). Large Language Models Understand and Can be Enhanced by Emotional Stimuli (No. arXiv:2307.11760). arXiv. https://doi.org/10.48550/arXiv.2307.11760
Li, X. (Shirley), Chan, K. W., & Kim, S. (2019). Service with Emoticons: How Customers Interpret Employee Use of Emoticons in Online Service Encounters. Journal of Consumer Research, 45(5), 973–987. https://doi.org/10.1093/jcr/ucy016
Lialin, V., Deshpande, V., Yao, X., & Rumshisky, A. (2024). Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning (No. arXiv:2303.15647). arXiv. https://doi.org/10.48550/arXiv.2303.15647
Lui, J. H. L., Marcus, D. K., & Barry, C. T. (2017). Evidence-based apps? A review of mental health mobile applications in a psychotherapy context. Professional Psychology: Research and Practice, 48(3), 199–210. https://doi.org/10.1037/pro0000122
Mercer, S. W., Maxwell, M., Heaney, D., & Watt, G. C. (2004). The consultation and relational empathy (CARE) measure: Development and preliminary validation and reliability of an empathy-based consultation process measure. Family Practice, 21(6), 699–705. https://doi.org/10.1093/fampra/cmh621
Messina, I., Calvo, V., Masaro, C., Ghedin, S., & Marogna, C. (2021). Interpersonal Emotion Regulation: From Research to Group Therapy. Frontiers in Psychology, 12. https://doi.org/10.3389/fpsyg.2021.636919
Mezirow, J. (1991). Transformative Dimensions of Adult Learning. Jossey-Bass.
Kosinski, M. (2024). Evaluating large language models in theory of mind tasks. PNAS. https://www.pnas.org/doi/10.1073/pnas.2405460121
Moerk, E. L. (1974). Age and epogenic influences on aspirations of minority and majority group children. Journal of Counseling Psychology, 21(4), 294–298. https://doi.org/10.1037/h0036640
Moon, J. (n.d.). AI chats feel "emotionally meaningful," say about 40% of young South Koreans in survey. The Korea Herald. Retrieved April 3, 2025, from https://www.koreaherald.com/article/10429545
Murphy, R. (2021, September 21). The Importance of Empathic Listening for Making Meaning of Distress. Mad In America. https://www.madinamerica.com/2021/09/empathic-listening-meaning/
Nass, C., & Moon, Y. (2000). Machines and Mindlessness: Social Responses to Computers. Journal of Social Issues, 56(1), 81–103. https://doi.org/10.1111/0022-4537.00153
Norman, D. A. (2013). The design of everyday things (Revised and expanded edition). Basic Books.
OpenAI. (2024a, March 13). Introducing ChatGPT. https://openai.com/index/chatgpt/
OpenAI. (2024b, March 13). Introducing GPTs. https://openai.com/index/introducing-gpts/
Prensky, M. (2001). Digital Natives, Digital Immigrants Part 1. On the Horizon, 9(5), 1–6. https://doi.org/10.1108/10748120110424816
Roy, R., & Naidoo, V. (2021). Enhancing chatbot effectiveness: The role of anthropomorphic conversational styles and time orientation. Journal of Business Research, 126, 23–34. https://doi.org/10.1016/j.jbusres.2020.12.051
Williams, R., Hopkins, S., Frampton, C., Holt-Quick, C., Merry, S. N., & Stasiak, K. (2021). 21-Day Stress Detox: Open Trial of a Universal Well-Being Chatbot for Young Adults. https://www.mdpi.com/2076-0760/10/11/416
Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. The American Psychologist, 55(1), 68–78. https://doi.org/10.1037//0003-066x.55.1.68
Schaaff, K., Reinig, C., & Schlippe, T. (2023). Exploring ChatGPT's Empathic Abilities (No. arXiv:2308.03527). arXiv. https://doi.org/10.48550/arXiv.2308.03527
Sharma, A., Lin, I. W., Miner, A. S., Atkins, D. C., & Althoff, T. (2022). Human-AI Collaboration Enables More Empathic Conversations in Text-based Peer-to-Peer Mental Health Support (No. arXiv:2203.15144). arXiv. https://doi.org/10.48550/arXiv.2203.15144
Skjuve, M., Følstad, A., Fostervold, K. I., & Brandtzaeg, P. B. (2022). A longitudinal study of human–chatbot relationships. International Journal of Human-Computer Studies, 168, 102903. https://doi.org/10.1016/j.ijhcs.2022.102903
Soma, C. S., Knox, D., Greer, T., Gunnerson, K., Young, A., & Narayanan, S. (2023). It's not what you said, it's how you said it: An analysis of therapist vocal features during psychotherapy. Counselling and Psychotherapy Research, 23(1), 258–269. https://doi.org/10.1002/capr.12489
Strauss, A., & Corbin, J. (1998). Basics of qualitative research techniques.
Sun, H., Lin, Z., Zheng, C., Liu, S., & Huang, M. (2021). PsyQA: A Chinese Dataset for Generating Long Counseling Text for Mental Health Support (No. arXiv:2106.01702). arXiv. https://doi.org/10.48550/arXiv.2106.01702
Zhou, T., & Zhang, C. (2024). Examining generative AI user addiction from a C-A-C perspective. https://www.sciencedirect.com/science/article/abs/pii/S0160791X2400201X
Thomas, P., Czerwinski, M., McDuff, D., Craswell, N., & Mark, G. (2018). Style and Alignment in Information-Seeking Conversation. Proceedings of the 2018 Conference on Human Information Interaction & Retrieval, 42–51. https://doi.org/10.1145/3176349.3176388
Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., Rodriguez, A., Joulin, A., Grave, E., & Lample, G. (2023). LLaMA: Open and Efficient Foundation Language Models (No. arXiv:2302.13971). arXiv. https://doi.org/10.48550/arXiv.2302.13971
Ukpe, E. (2023). Information and Communication Technologies (ICTs) for E-Learning in Tertiary Education. Open Journal of Social Sciences, 11(12), 666–680. https://doi.org/10.4236/jss.2023.1112044
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is All you Need. Advances in Neural Information Processing Systems, 30. https://papers.nips.cc/paper_files/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html
Vollstedt, M., & Rezat, S. (2019). An introduction to grounded theory with a special focus on axial coding and the coding paradigm. In G. Kaiser & N. Presmeg (Eds.), Compendium for early career researchers in mathematics education (pp. 81–100). Springer International Publishing. https://doi.org/10.1007/978-3-030-15636-7_4
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q., & Zhou, D. (2023). Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (No. arXiv:2201.11903). arXiv. https://doi.org/10.48550/arXiv.2201.11903
Weizenbaum, J. (1966). ELIZA—a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–45. https://doi.org/10.1145/365153.365168
Wilson, T. (2025). A Psycho-Spiritual Journey. Mad In America. https://www.madinamerica.com/2025/01/a-psycho-spiritual-journey/
Wu, J., Gan, W., Chen, Z., Wan, S., & Yu, P. S. (2023). Multimodal Large Language Models: A Survey (No. arXiv:2311.13165). arXiv. https://doi.org/10.48550/arXiv.2311.13165
Xu, Y., Zhang, J., & Deng, G. (n.d.). Enhancing customer satisfaction with chatbots: The influence of communication styles and consumer attachment anxiety. Frontiers in Psychology. Retrieved April 3, 2025, from https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2022.902782/full
Yin, Y., Jia, N., & Wakslak, C. J. (2024). AI can help people feel heard, but an AI label diminishes this impact. Proceedings of the National Academy of Sciences, 121(14), e2319112121. https://doi.org/10.1073/pnas.2319112121
Wang, Z., Xie, Q., Feng, Y., Yang, Z., Xia, R., & Ding, Z. (2023). Is ChatGPT a Good Sentiment Analyzer? A Preliminary Study. https://arxiv.org/abs/2304.04339
Zhao, W. X., Zhou, K., Li, J., Tang, T., Wang, X., Hou, Y., Min, Y., Zhang, B., Zhang, J., Dong, Z., Du, Y., Yang, C., Chen, Y., Chen, Z., Jiang, J., Ren, R., Li, Y., Tang, X., Liu, Z., … Wen, J.-R. (2025). A Survey of Large Language Models (No. arXiv:2303.18223). arXiv. https://doi.org/10.48550/arXiv.2303.18223

Description: Master's thesis (碩士)
National Chengchi University (國立政治大學)
Master's Program in Digital Content (數位內容碩士學位學程)
Student ID: 108462008
Source: http://thesis.lib.nccu.edu.tw/record/#G0108462008
URI: https://nccur.lib.nccu.edu.tw/handle/140.119/159386
Identifier: G0108462008
Type: thesis
Format: application/pdf (4303245 bytes)

Table of Contents:
Chapter 1: Research Background and Motivation
  1.1 Diverse forms of mental health support
  1.2 Recent poor mental health among young people in Taiwan
  1.3 AI as part of psychological support
  1.4 Research motivation and purpose
    1.4.1 Research purpose
    1.4.2 Research questions
Chapter 2: Literature Review
  2.1 What LLMs are and how they developed
  2.2 Emotional resonance between humans and machines
  2.3 Emotional support efficacy of conversational chatbots
  2.4 Support strategies in human emotional interaction and communication
  2.5 Applying counseling from human relationships to human-computer interaction
  2.6 Supportive Communication Theory in interpersonal interaction
Chapter 3: Methods
  3.1 Overview of methods
    3.1.1 Measurement instruments
      3.1.1.1 Interview design for the post-diary-study stage
      3.1.1.2 Design of the in-interaction chatbot questionnaire
      3.1.1.3 Chatbot empathy assessment questionnaire
  3.2 Design of the study chatbot
    3.2.1 Prompt design rationale and theoretical basis
    3.2.2 System prompt template design
    3.2.3 Chatbot environment setup
    3.2.4 Conversation content design and ethical safeguards
    3.2.5 Ethics and safety during the experiment
      3.2.5.1 Handling emotional fluctuations and support mechanisms
      3.2.5.2 Detecting high-risk utterances and real-time reporting
  3.3 Pilot study
    3.3.1 Pilot study procedure
    3.3.2 Backgrounds of the collaborating professionals
    3.3.3 Pilot study results
      3.3.3.1 Support-style text annotation results
      3.3.3.2 Perceived empathy scale results
      3.3.3.3 Basic descriptive statistics of the empathy scale
      3.3.3.4 Support-type analysis by scenario
    3.3.4 Chatbot empathy questionnaire analysis
    3.3.5 Pilot interview results
    3.3.6 Summary of pilot evaluation results
    3.3.7 Concrete prompt improvement strategies
  3.4 Recruitment and screening for the main study
  3.5 Diary study procedure
  3.6 Analysis and coding for the main study
Chapter 4: Results
  4.1 Analysis and coding
    4.1.1 Interview data as the basis for theory building
    4.1.2 Principles of data collection and organization
    4.1.3 Classification of interviewees' interaction tendencies
    4.1.4 Open coding design
  4.2 Axial coding
    4.2.1 Subtheme A1: Perceived tone style and emotional warmth
    4.2.2 Subtheme A2: Building supportive relationships and trust
    4.2.3 Subtheme A3: Empathy quality and emotional understanding
    4.2.4 Subtheme B1: Integrating emotion regulation and interaction pacing
    4.2.5 Subtheme B2: Problem orientation and expectations of practical functions
    4.2.6 Subtheme C1: Conversational participation and expectations of control
    4.2.7 Subtheme C2: Expanding cognitive perspectives and reframing thoughts
    4.2.8 Staged organization of qualitative data and theoretical convergence
      (Six mechanisms: 1. tone style and emotional warmth, shaping initial trust and motivation to express; 2. supportive relationship and sense of safety, determining whether users continue to open up; 3. empathy quality and semantic understanding, the key to feeling understood; 4. interaction pacing and timing of intervention, affecting the effectiveness and acceptance of support; 5. problem orientation and practical advice, linking emotional support to action efficacy; 6. potential for expanding cognitive perspectives, prompting reflection and self-regulation. These mechanisms all center on the supportive interaction experience and interweave dynamically in real dialogue, forming the basis of users' trust in and satisfaction with the AI system.)
  4.3 Diary content analysis and validation of the coding results
    4.3.1 Correspondence 1: thematic content
    4.3.2 Correspondence 2: continuity of the psychological process
    4.3.3 Correspondence 3: answering the research questions
    4.3.4 Summary of diary study correspondence
  4.4 Integration and review of quantitative data
    4.4.1 Correspondence among the three-stage questionnaire items
    4.4.2 Cross-period comparison results
    4.4.3 Cross-period analysis of each comparison dimension
    4.4.4 Mapping questionnaire dimensions to the three themes and seven subthemes
    4.4.5 Cross-mapping questionnaire data to the three-stage process
  4.5 Chatbot perceived empathy scale
    4.5.1 Questionnaire design, sample, and administration
    4.5.2 Analysis results of the chatbot perceived empathy scale
    4.5.3 Cross-mapping questionnaire data to the three-stage process
  4.6 Selective coding and core categories
  4.7 Theory building
    4.7.1 Integrating core categories and constructing the model
    4.7.2 Stage 1: trust building and tone perception
    4.7.3 Stage 2: pacing regulation and functional coping
    4.7.4 Stage 3: cognitive engagement and self-reconstruction
    4.7.5 Integrating the process logic and testing the model
Chapter 5: Discussion
  5.1 Main conclusions
  5.2 Researcher insights
  5.3 Practical recommendations
  5.4 Limitations and future directions
  5.5 Summary and review
References (Chinese and English)
Appendix 1: Health Promotion Administration stress index scale (衛福部國民健康署壓力指數測量表)
Appendix 2: System prompt design
Appendix 3: Pilot test samples and dialogue templates for the formal system
Appendix 4: Questionnaires for each stage of participation
Appendix 5: Chatbot empathy assessment questionnaire
Chin, H., Song, H., Baek, G., Shin, M., Jung, C., Cha, M., & Choi, J. (2024). The potential of chatbots for emotional support and promoting mental well-being in different cultures: Mixed methods study. Journal of Medical Internet Research. https://www.jmir.org/2023/1/e51712
Pentina, I., Xie, T., Hancock, T., & Bailey, A. A. (2023). Consumer-machine relationships in the age of artificial intelligence: Systematic literature review and research directions. https://www.researchgate.net/publication/371229071_Consumer-machine_relationships_in_the_age_of_artificial_intelligence_Systematic_literature_review_and_research_directions
Fitzpatrick, K. K., Darcy, A., & Vierhile, M. (2017). Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): A randomized controlled trial. JMIR Mental Health, 4(2), e19. https://mental.jmir.org/2017/2/e19/
Kaye, L. K., Malone, S. A., & Wall, H. J. (2017). Emojis: Insights, affordances, and possibilities for psychological science. Trends in Cognitive Sciences, 21(2), 66–68. https://doi.org/10.1016/j.tics.2016.10.007
Ng, K. (2025). "DeepSeek brought me to tears": How young Chinese find therapy in AI. BBC News. https://www.bbc.com/news/articles/cy7g45g2nxno
Knapp, M. L., & Daly, J. A. (2002). Handbook of interpersonal communication. SAGE.
Kramer, A. D. I., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111(24), 8788–8790. https://doi.org/10.1073/pnas.1320040111
Lai, T., Shi, Y., Du, Z., Wu, J., Fu, K., Dou, Y., & Wang, Z. (2024). Supporting the demand on mental health services with AI-based conversational large language models (LLMs). BioMedInformatics, 4(1), Article 1. https://doi.org/10.3390/biomedinformatics4010002
Lan, A., Lee, A., Munroe, K., McRae, C., Kaleis, L., Keshavjee, K., & Guergachi, A. (2018). Review of cognitive behavioural therapy mobile apps using a reference architecture embedded in the patient-provider relationship. Biomedical Engineering Online, 17(1), 183. https://doi.org/10.1186/s12938-018-0611-4
Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50_30392
Greenberg, L. S., Rice, L. N., & Elliott, R. (1996). Facilitating emotional change: The moment-by-moment process. Guilford Press.
Li, C., Wang, J., Zhang, Y., Zhu, K., Hou, W., Lian, J., Luo, F., Yang, Q., & Xie, X. (2023). Large language models understand and can be enhanced by emotional stimuli (No. arXiv:2307.11760). arXiv. https://doi.org/10.48550/arXiv.2307.11760
Li, X. (Shirley), Chan, K. W., & Kim, S. (2019). Service with emoticons: How customers interpret employee use of emoticons in online service encounters. Journal of Consumer Research, 45(5), 973–987. https://doi.org/10.1093/jcr/ucy016
Lialin, V., Deshpande, V., Yao, X., & Rumshisky, A. (2024). Scaling down to scale up: A guide to parameter-efficient fine-tuning (No. arXiv:2303.15647). arXiv. https://doi.org/10.48550/arXiv.2303.15647
Lui, J. H. L., Marcus, D. K., & Barry, C. T. (2017). Evidence-based apps? A review of mental health mobile applications in a psychotherapy context. Professional Psychology: Research and Practice, 48(3), 199–210. https://doi.org/10.1037/pro0000122
Mercer, S. W., Maxwell, M., Heaney, D., & Watt, G. C. (2004). The consultation and relational empathy (CARE) measure: Development and preliminary validation and reliability of an empathy-based consultation process measure. Family Practice, 21(6), 699–705. https://doi.org/10.1093/fampra/cmh621
Messina, I., Calvo, V., Masaro, C., Ghedin, S., & Marogna, C. (2021). Interpersonal emotion regulation: From research to group therapy. Frontiers in Psychology, 12. https://doi.org/10.3389/fpsyg.2021.636919
Mezirow, J. (1991). Transformative dimensions of adult learning. Jossey-Bass.
Kosinski, M. (2024). Evaluating large language models in theory of mind tasks. Proceedings of the National Academy of Sciences. https://www.pnas.org/doi/10.1073/pnas.2405460121
Moerk, E. L. (1974). Age and epogenic influences on aspirations of minority and majority group children. Journal of Counseling Psychology, 21(4), 294–298. https://doi.org/10.1037/h0036640
Moon, J. (n.d.). AI chats feel "emotionally meaningful," say about 40% of young South Koreans in survey. The Korea Herald. Retrieved April 3, 2025, from https://www.koreaherald.com/article/10429545
Murphy, R. (2021, September 21). The importance of empathic listening for making meaning of distress. Mad In America. https://www.madinamerica.com/2021/09/empathic-listening-meaning/
Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81–103. https://doi.org/10.1111/0022-4537.00153
Norman, D. A. (2013). The design of everyday things (Revised and expanded edition). Basic Books.
OpenAI. (2024a, March 13). Introducing ChatGPT. https://openai.com/index/chatgpt/
OpenAI. (2024b, March 13). Introducing GPTs. https://openai.com/index/introducing-gpts/
Prensky, M. (2001). Digital natives, digital immigrants part 1. On the Horizon, 9(5), 1–6. https://doi.org/10.1108/10748120110424816
Roy, R., & Naidoo, V. (2021). Enhancing chatbot effectiveness: The role of anthropomorphic conversational styles and time orientation. Journal of Business Research, 126, 23–34. https://doi.org/10.1016/j.jbusres.2020.12.051
Williams, R., Hopkins, S., Frampton, C., Holt-Quick, C., Merry, S. N., & Stasiak, K. (2021). 21-day stress detox: Open trial of a universal well-being chatbot for young adults. Social Sciences, 10(11), 416. https://www.mdpi.com/2076-0760/10/11/416
Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. The American Psychologist, 55(1), 68–78. https://doi.org/10.1037//0003-066x.55.1.68
Schaaff, K., Reinig, C., & Schlippe, T. (2023). Exploring ChatGPT's empathic abilities (No. arXiv:2308.03527). arXiv. https://doi.org/10.48550/arXiv.2308.03527
Sharma, A., Lin, I. W., Miner, A. S., Atkins, D. C., & Althoff, T. (2022). Human-AI collaboration enables more empathic conversations in text-based peer-to-peer mental health support (No. arXiv:2203.15144). arXiv. https://doi.org/10.48550/arXiv.2203.15144
Skjuve, M., Følstad, A., Fostervold, K. I., & Brandtzaeg, P. B. (2022). A longitudinal study of human–chatbot relationships. International Journal of Human-Computer Studies, 168, 102903. https://doi.org/10.1016/j.ijhcs.2022.102903
Soma, C. S., Knox, D., Greer, T., Gunnerson, K., Young, A., & Narayanan, S. (2023). It's not what you said, it's how you said it: An analysis of therapist vocal features during psychotherapy. Counselling and Psychotherapy Research, 23(1), 258–269. https://doi.org/10.1002/capr.12489
Strauss, A., & Corbin, J. (1998). Basics of qualitative research techniques. Sage.
Sun, H., Lin, Z., Zheng, C., Liu, S., & Huang, M. (2021). PsyQA: A Chinese dataset for generating long counseling text for mental health support (No. arXiv:2106.01702). arXiv. https://doi.org/10.48550/arXiv.2106.01702
Zhou, T., & Zhang, C. (2024). Examining generative AI user addiction from a C-A-C perspective. Technology in Society. https://www.sciencedirect.com/science/article/abs/pii/S0160791X2400201X?via%3Dihub
Thomas, P., Czerwinski, M., McDuff, D., Craswell, N., & Mark, G. (2018). Style and alignment in information-seeking conversation. Proceedings of the 2018 Conference on Human Information Interaction & Retrieval, 42–51. https://doi.org/10.1145/3176349.3176388
Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., Rodriguez, A., Joulin, A., Grave, E., & Lample, G. (2023). LLaMA: Open and efficient foundation language models (No. arXiv:2302.13971). arXiv. https://doi.org/10.48550/arXiv.2302.13971
Ukpe, E. (2023). Information and communication technologies (ICTs) for e-learning in tertiary education. Open Journal of Social Sciences, 11(12), 666–680. https://doi.org/10.4236/jss.2023.1112044
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30. https://papers.nips.cc/paper_files/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html
Vollstedt, M., & Rezat, S. (2019). An introduction to grounded theory with a special focus on axial coding and the coding paradigm. In G. Kaiser & N. Presmeg (Eds.), Compendium for early career researchers in mathematics education (pp. 81–100). Springer International Publishing. https://doi.org/10.1007/978-3-030-15636-7_4
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q., & Zhou, D. (2023). Chain-of-thought prompting elicits reasoning in large language models (No. arXiv:2201.11903). arXiv. https://doi.org/10.48550/arXiv.2201.11903
Weizenbaum, J. (1966). ELIZA—a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–45. https://doi.org/10.1145/365153.365168
Wilson, T. (2025). A psycho-spiritual journey. Mad In America. https://www.madinamerica.com/2025/01/a-psycho-spiritual-journey/
Wu, J., Gan, W., Chen, Z., Wan, S., & Yu, P. S. (2023). Multimodal large language models: A survey (No. arXiv:2311.13165). arXiv. https://doi.org/10.48550/arXiv.2311.13165
Xu, Y., Zhang, J., & Deng, G. (n.d.). Enhancing customer satisfaction with chatbots: The influence of communication styles and consumer attachment anxiety. Frontiers in Psychology. Retrieved April 3, 2025, from https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2022.902782/full
Yin, Y., Jia, N., & Wakslak, C. J. (2024). AI can help people feel heard, but an AI label diminishes this impact. Proceedings of the National Academy of Sciences, 121(14), e2319112121. https://doi.org/10.1073/pnas.2319112121
Wang, Z., Xie, Q., Feng, Y., Yang, Z., Xia, R., & Ding, Z. (2023). Is ChatGPT a good sentiment analyzer? A preliminary study. arXiv. https://arxiv.org/abs/2304.04339
Zhao, W. X., Zhou, K., Li, J., Tang, T., Wang, X., Hou, Y., Min, Y., Zhang, B., Zhang, J., Dong, Z., Du, Y., Yang, C., Chen, Y., Chen, Z., Jiang, J., Ren, R., Li, Y., Tang, X., Liu, Z., … Wen, J.-R. (2025). A survey of large language models (No. arXiv:2303.18223). arXiv. https://doi.org/10.48550/arXiv.2303.18223
