Title 安心專線對話過程中語句累積對自然語言處理模型於自殺風險預測之研究
Research on the Impact of Incremental Utterance Input During Taiwan Lifeline Conversations on Natural Language Processing Models for Suicide Risk Prediction
Author 胡鈞彥
Hu, Chun-Yen
Contributors 游琇婷
Yu, Hsiu-Ting
胡鈞彥
Hu, Chun-Yen
Keywords 安心專線
自殺風險預測
語句累積輸入
自然語言處理
LLMs
BERT
Vicuna
Taiwan Lifeline
suicide risk detection
incremental utterance input
natural language processing
LLMs
BERT
Vicuna
Date 2024
Date uploaded 2-May-2024 10:34:13 (UTC+8)
Abstract 自殺是全球重要的公共衛生議題之一,因其不僅影響個人及家庭,也可能影響國民心理健康與經濟發展,增加社會成本。有鑑於此,對自殺風險的即時偵測與介入顯得尤為重要。過去研究利用自然語言處理技術進行自動化自殺風險偵測,但多基於線上平台的資料,如社交媒體或論壇貼文,較缺乏使用真實對話資料進行分析的研究。安心專線是台灣提供線上心理協談服務的通話專線,其資料性質不同於一般的「靜態」文本資料,對話中即時互動提供更豐富的心理訊息,因此本研究以安心專線通話資料進行自殺風險偵測分析,使用將通話記錄謄寫成逐字稿的文本資料進行自殺風險評估。此資料包含求助者與接線員間的對話文字記錄,以及由臨床心理學專家判定的求助者自殺風險等級。本研究主要關注兩個問題:一是如何利用通話中語句隨時間推進的特性來進行更精確的自殺風險評估;二是使用SBERT計算句子相似性時,自殺量表與自殺關聯詞典等不同的參照語句對自殺風險偵測模型的性能有何影響。本研究預期能更了解自然語言處理模型在不同對話階段的表現,且增進模型對自殺風險的早期識別能力,從而提供更即時的協助給自殺防治的實務工作者。
Suicide is a major public health concern worldwide, affecting not only individuals and their families but also national mental well-being and economic development, adding to societal costs. Timely detection of and intervention in suicide risk are therefore imperative. Past research has applied natural language processing (NLP) to automated suicide risk detection, but has relied mostly on data from online platforms such as social media or forum posts; few studies have used real conversational data. In Taiwan, the Lifeline hotline provides telephone-based psychological counseling, and its data differ from standard "static" text: the real-time interaction in a conversation offers richer psychological information. This research therefore uses transcripts of Lifeline calls for suicide risk detection. The dataset consists of written transcripts of conversations between callers and counselors, together with each caller's suicide risk level as judged by clinical psychology experts. The study addresses two main questions. First, how can the accumulation of utterances as a call progresses be used for more precise suicide risk assessment? Second, when SBERT is used to compute sentence similarity, how do different reference sentences, such as suicide scale items and suicide-related dictionary terms, affect the performance of suicide risk detection models? The study aims to improve understanding of how NLP models perform at different stages of a conversation and to strengthen early identification of suicide risk, thereby providing more timely support to practitioners in suicide prevention.
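The abstract describes two mechanisms: feeding the model the utterances accumulated so far as the call progresses, and using SBERT sentence similarity against reference sentences (suicide scale items, suicide-related dictionary terms) as a signal of risk. A minimal sketch of how such a pipeline could be wired together is given below; it is an illustration only, not the thesis's implementation. The SBERT checkpoint name, the example reference sentences, the toy transcript, the per-stage labels, and the random forest over maximum-similarity features are all assumptions made for this example.

# Minimal illustrative sketch (assumptions noted above), not the thesis's actual pipeline.
import numpy as np
from sentence_transformers import SentenceTransformer, util
from sklearn.ensemble import RandomForestClassifier

# Assumed multilingual SBERT checkpoint; the thesis's exact model is not specified here.
sbert = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Hypothetical reference sentences standing in for suicide scale items or dictionary terms.
reference_sentences = ["我覺得活著沒有意義", "我想過要結束自己的生命"]
ref_emb = sbert.encode(reference_sentences, convert_to_tensor=True)

def cumulative_features(utterances):
    """For each point in the call, embed the utterances accumulated so far and
    keep, for every reference sentence, the highest cosine similarity observed."""
    feats = []
    for t in range(1, len(utterances) + 1):
        seg_emb = sbert.encode(utterances[:t], convert_to_tensor=True)  # utterances up to time t
        sims = util.cos_sim(seg_emb, ref_emb)                           # shape: (t, n_references)
        feats.append(sims.max(dim=0).values.cpu().numpy())              # best match per reference
    return np.vstack(feats)

# Toy call transcript; the real Lifeline transcripts and expert risk labels are not public.
call = ["最近工作壓力很大", "晚上都睡不好，常常想哭", "有時候覺得消失掉會比較輕鬆"]
X = cumulative_features(call)
y = np.array([0, 0, 1])  # illustrative per-stage labels; the thesis uses expert-rated risk levels per call

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
print(clf.predict(cumulative_features(call)))  # risk prediction at each stage of the call

In the thesis itself, random forest, BERT, and Vicuna models are compared, and different reference materials are contrasted, as listed in the table of contents below.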
Description Master's degree
National Chengchi University
Department of Psychology
110752002
Source http://thesis.lib.nccu.edu.tw/record/#G0110752002
Type thesis
Identifier G0110752002
URI https://nccur.lib.nccu.edu.tw/handle/140.119/151094
Table of contents
  Acknowledgements i
  Abstract (Chinese) ii
  Abstract (English) iii
  Table of Contents iv
  List of Tables vi
  List of Figures vii
  Introduction 1
    Research Background 1
    Suicide Risk Detection Research 1
    Taiwan Lifeline Data 2
    Research Objectives 2
  Literature Review 4
    Natural Language Processing 4
    Deep Learning 5
    Large Language Models 7
    Suicide Risk Detection Research 12
  Methods 18
    Research Data 19
    Random Forest Model Construction Procedure 25
    BERT Model Construction Procedure 25
    Vicuna Model Application Procedure 26
    Model Evaluation 27
    Model Evaluation Metrics 27
    Model Settings 30
  Results 34
    Random Forest Model 35
    BERT Model 41
    Vicuna Model 47
  Discussion and Conclusions 53
    Random Forest Model and BERT Model 53
    Different Reference Materials 53
    Vicuna Model and Discriminative Language Models 54
    Research Contributions 55
    Research Limitations 55
  References 56
    Chinese-Language References 56
    Foreign-Language References 56
  Appendices 61
    Appendix 1: Vicuna Model Input Prompts 61
Format 3417422 bytes, application/pdf