
Title 深度學習之中文歌詞段落情緒辨識
Deep Learning-Based Paragraph-Level Emotion Recognition of Chinese Song Lyrics
Author 標云
Biao, Yun
Contributors 張瑜芸
Chang, Yu-Yun
標云
Biao, Yun
Keywords 深度學習
情感辨識
中文歌詞
效價
喚醒
BERT
敘事理論
Deep Learning
Emotion Recognition
Chinese Song Lyrics
Valence
Arousal
BERT
Narrative Theory
Date 2024
Uploaded 5-Aug-2024 15:06:03 (UTC+8)
Abstract 本研究探討結合深度學習技術與敘事理論在中文歌曲歌詞段落情感識別中的應用。本研究之動機源於音樂在人類生活中的重要性、個性化音樂串流服務的興起以及日益增長的自動情感識別之需求。本研究以BERT模型實現,訓練BERT模型來預測中文歌曲歌詞中的效價(正面或負面情感傾向)、喚醒程度(情感激動強度)及其二者之交織狀態(情感象限)。敘事理論中的主題和結構分析的整合提供了對歌詞情感表達更深入的理解。實驗結果證明了該模型在情感分類中的效率和準確性,表明其在提升音樂推薦系統品質方面的潛在實用性。所有用於預測情感的 BERT 模型,包括正面或負面情感傾向(Accuracy = 0.91,F-score = 0.90)、情感激動強度(Accuracy = 0.86,F-score = 0.86)以及情感象限的 BERT 模型(Accuracy = 0.77,F-score = 0.76),都優於正面或負面情感傾向(Accuracy = 0.68,F-score = 0.65)、情感激動強度(Accuracy = 0.65,F-score = 0.64)和情感象限(Accuracy = 0.48,F-score = 0.45)的基線模型。此外,通過敘事理論進行的錯誤分析確定了導致誤分類的關鍵因素,這些因素包括詞彙歧義、句法複雜性和敘事之流動性,這些都在準確解釋歌詞中發揮著重要作用。整體而言,本研究強調了將敘事分析與深度學習技術相結合的價值,以實現更為複雜和準確的中文歌曲歌詞情感辨識系統。
This study explores the implementation of deep learning techniques alongside narrative theory for paragraph-level emotion recognition in Chinese song lyrics. It is motivated by the integral role of music in human life and the growing demand for automatic emotion recognition systems driven by personalized music streaming services. We leverage the BERT model to implement and evaluate machine learning models trained to predict valence (positive or negative emotions), arousal (intensity of emotion), and their intertwined states (emotional quadrants) from Chinese song lyrics. The integration of thematic and structural analysis derived from narrative theory provides a deeper understanding of lyrics' emotional expression. Experimental results demonstrate the model's efficiency and accuracy in classifying emotions, indicating its potential utility in improving the quality of music recommendation systems. All BERT models for predicting valence (Accuracy = 0.91, F-score = 0.90), arousal (Accuracy = 0.86, F-score = 0.86) and quadrants (Accuracy = 0.77, F-score = 0.76) outperformed baseline models of valence (Accuracy = 0.68, F-score = 0.65), arousal (Accuracy = 0.65, F-score = 0.64), and quadrants (Accuracy = 0.48, F-score = 0.45). Furthermore, our error analysis, informed by narrative theory, identifies key factors contributing to misclassification. These factors include lexical ambiguity, syntactic complexity, and narrative flow, all of which play significant roles in the accurate interpretation of lyrics. Overall, this research underscores the value of blending narrative analysis with deep learning techniques to achieve a more sophisticated and accurate system for emotion recognition in Chinese song lyrics.
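As the abstract notes, the "intertwined states (emotional quadrants)" combine the two binary predictions into the four quadrants of Russell's (1980) circumplex model. A minimal sketch of that mapping, for illustration only (the function name and label strings are hypothetical, not taken from the thesis):

```python
def to_quadrant(valence: str, arousal: str) -> str:
    """Combine a binary valence prediction ('positive'/'negative') and a
    binary arousal prediction ('high'/'low') into one of the four emotion
    quadrants of Russell's (1980) circumplex model."""
    quadrants = {
        ("positive", "high"): "Q1 (e.g. joyful, excited)",
        ("negative", "high"): "Q2 (e.g. angry, anxious)",
        ("negative", "low"): "Q3 (e.g. sad, depressed)",
        ("positive", "low"): "Q4 (e.g. calm, content)",
    }
    return quadrants[(valence, arousal)]
```

The four-way quadrant task is strictly harder than either binary task, which is consistent with the reported scores (Accuracy 0.77 for quadrants vs. 0.91 and 0.86 for valence and arousal).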
References
Abdillah, J., Asror, I., Wibowo, Y. F. A., et al. (2020). Emotion classification of song lyrics using bidirectional LSTM method with GloVe word representation weighting. Jurnal RESTI (Rekayasa Sistem Dan Teknologi Informasi), 4(4), 723–729.
Agrawal, Y., Shanker, R. G. R., & Alluri, V. (2021). Transformer-based approach towards music emotion recognition from lyrics. European Conference on Information Retrieval, 167–175.
Ahonen, H., & Desideri, A. M. (2007). Group analytic music therapy. Voices, 14, 686.
Alorainy, W., Burnap, P., Liu, H., Javed, A., & Williams, M. L. (2018). Suspended accounts: A source of tweets with disgust and anger emotions for augmenting hate speech data sample. 2018 International Conference on Machine Learning and Cybernetics (ICMLC), 2, 581–586.
An, Y., Sun, S., & Wang, S. (2017). Naive Bayes classifiers for music emotion classification based on lyrics. 2017 IEEE/ACIS 16th International Conference on Computer and Information Science (ICIS), 635–638.
Arumugam, D., et al. (2011). Emotion classification using facial expression. International Journal of Advanced Computer Science and Applications, 2(7).
Baker, F., Wigram, T., Stott, D., & McFerran, K. (2008). Therapeutic songwriting in music therapy, Part I: Who are the therapists, who are the clients, and why is songwriting used? Nordic Journal of Music Therapy, 17(2), 105–123.
Barradas, G. T., & Sakka, L. S. (2022). When words matter: A cross-cultural perspective on lyrics and their relationship to musical emotions. Psychology of Music, 50(2), 650–669.
Besson, M., Faita, F., Peretz, I., Bonnel, A.-M., & Requin, J. (1998). Singing in the brain: Independence of lyrics and tunes. Psychological Science, 9(6), 494–498.
Chaudhary, D., Singh, N. P., & Singh, S. (2021). Development of music emotion classification system using convolution neural network. International Journal of Speech Technology, 24, 571–580.
Chiril, P., Pamungkas, E. W., Benamara, F., Moriceau, V., & Patti, V. (2022). Emotionally informed hate speech detection: A multi-target perspective. Cognitive Computation, 1–31.
Desmet, B., & Hoste, V. (2013). Emotion detection in suicide notes. Expert Systems with Applications, 40(16), 6351–6358.
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Edmonds, D., & Sedoc, J. (2021). Multi-emotion classification for song lyrics. Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, 221–235.
Ekman, P. (1992). Facial expressions of emotion: New findings, new questions.
Fludernik, M. (2009). An introduction to narratology. Routledge.
Frijda, N. H. (1986). The emotions. Cambridge University Press.
Genette, G. (1988). Narrative discourse revisited. Cornell University Press.
Guillemette, L., & Lévesque, C. (2016). Narratology. In Louis Hébert (Ed.), Signo [online]. Rimouski (Quebec). http://www.signosemio.com/genette/narratology.asp
Habermas, T. (2018). Kinds of emotional effects of narratives. In Emotion and narrative: Perspectives in autobiographical storytelling (pp. 97–121). Cambridge University Press.
Hallam, S., Cross, I., & Thaut, M. (2009). Oxford handbook of music psychology. Oxford University Press.
He, H., Jin, J., Xiong, Y., Chen, B., Sun, W., & Zhao, L. (2008). Language feature mining for music emotion classification via supervised learning from lyrics. Advances in Computation and Intelligence: Third International Symposium, ISICA 2008, Wuhan, China, December 19–21, 2008, Proceedings 3, 426–435.
Herman, D., Phelan, J., Rabinowitz, P. J., Richardson, B., & Warhol, R. (2012). Narrative theory: Core concepts and critical debates. The Ohio State University Press.
Hinchman, K. A., & Moore, D. W. (2013). Close reading: A cautionary interpretation. Journal of Adolescent & Adult Literacy, 56(6), 441–450.
Houjeij, A., Hamieh, L., Mehdi, N., & Hajj, H. (2012). A novel approach for emotion classification based on fusion of text and speech. 2012 19th International Conference on Telecommunications (ICT), 1–6.
Hu, X., & Downie, J. S. (2010). Improving mood classification in music digital libraries by combining lyrics and audio. Proceedings of the 10th Annual Joint Conference on Digital Libraries, 159–168.
Hu, Y., Chen, X., & Yang, D. (2009). Lyric-based song emotion detection with affective lexicon and fuzzy clustering method. ISMIR, 123–128.
Jain, S., & Wallace, B. C. (2019). Attention is not explanation. Proceedings of NAACL-HLT, 3543–3556.
Juslin, P. N., & Laukka, P. (2004). Expression, perception, and induction of musical emotions: A review and a questionnaire study of everyday listening. Journal of New Music Research, 33(3), 217–238.
Kaan, S. (2021). Themes and narrative structures in the lyrics of Hozier [B.S. thesis]. S. Kaan.
Kim, M., & Kwon, H.-C. (2011). Lyrics-based emotion classification using feature selection by partial syntactic analysis. 2011 IEEE 23rd International Conference on Tools with Artificial Intelligence, 960–964.
Ko, D. (2014). Lyric analysis of popular and original music with adolescents. Journal of Poetry Therapy, 27(4), 183–192.
Kreuter, M. W., Green, M. C., Cappella, J. N., Slater, M. D., Wise, M. E., Storey, D., Clark, E. M., O'Keefe, D. J., Erwin, D. O., Holmes, K., et al. (2007). Narrative communication in cancer prevention and control: A framework to guide research and application. Annals of Behavioral Medicine, 33, 221–235.
Lee, L.-H., Li, J.-H., & Yu, L.-C. (2022). Chinese EmoBank: Building valence-arousal resources for dimensional sentiment analysis. Transactions on Asian and Low-Resource Language Information Processing, 21(4), 1–18.
Li, C., Li, J. W., Pun, S. H., & Chen, F. (2021). An ERP study on the influence of lyric to song's emotional state. 2021 10th International IEEE/EMBS Conference on Neural Engineering (NER), 933–936.
Liao, J.-Y., Lin, Y.-H., Lin, K.-C., & Chang, J.-W. (2021). 以遷移學習改善深度神經網路模型於中文歌詞情緒辨識 (Using transfer learning to improve deep neural networks for lyrics emotion recognition in Chinese). International Journal of Computational Linguistics & Chinese Language Processing, 26(2).
Liu, T. Y. (2021). 台灣2008至2020年音樂治療相關碩士學位論文內容分析 (Content analysis of music therapy-related master's degree theses in Taiwan from 2008 to 2020) [Doctoral dissertation].
Liu, Y., Liu, Y., Zhao, Y., & Hua, K. A. (2015). What strikes the strings of your heart? Feature mining for music emotion analysis. IEEE Transactions on Affective Computing, 6(3), 247–260.
Luck, G., Toiviainen, P., Erkkilä, J., Lartillot, O., Riikkilä, K., Mäkelä, A., Pyhäluoto, K., Raine, H., Varkila, L., & Värri, J. (2008). Modelling the relationships between emotional responses to, and musical content of, music therapy improvisations. Psychology of Music, 36(1), 25–45.
Ma, W.-Y., & Chen, K.-J. (2003). Introduction to CKIP Chinese word segmentation system for the first international Chinese word segmentation bakeoff. Proceedings of the Second SIGHAN Workshop on Chinese Language Processing, 168–171. https://doi.org/10.3115/1119250.1119276
Malheiro, R., Panda, R., Gomes, P., & Paiva, R. P. (2016). Emotionally-relevant features for classification and regression of music lyrics. IEEE Transactions on Affective Computing, 9(2), 240–254.
McKinney, M., & Breebaart, J. (2003). Features for audio and music classification.
Mohsin, M. A., & Beltiukov, A. (2019). Summarizing emotions from text using Plutchik's wheel of emotions. 7th Scientific Conference on Information Technologies for Intelligent Decision Making Support (ITIDS 2019), 291–294.
Mokhsin, M. B., Rosli, N. B., Adnan, W. A. W., & Manaf, N. A. (2014). Automatic music emotion classification using artificial neural network based on vocal and instrumental sound timbres. SoMeT, 3–14.
Negus, K. (2012). Narrative, interpretation and the popular song. Musical Quarterly, 95(2–3), 368–395.
Nicholls, D. (2007). Narrative theory as an analytical tool in the study of popular music texts. Music and Letters, 88(2), 297–315.
Palmer, A. (2015). Narrative and minds in the traditional ballads of early country music. In Narrative theory, literature, and new media (pp. 205–220). Routledge.
Plutchik, R. (1980). A general psychoevolutionary theory of emotion. In Theories of emotion (pp. 3–33). Elsevier.
Rajesh, S., & Nalini, N. (2020). Musical instrument emotion recognition using deep recurrent neural network. Procedia Computer Science, 167, 16–25.
Randle, Q. (2013). So what does "Set Fire to the Rain" really mean? A typology for analyzing pop song lyrics using narrative theory and semiotics. MEIEA Journal, 13(1), 125–147.
Revathy, V., Pillai, A. S., & Daneshfar, F. (2023). LyEmoBERT: Classification of lyrics' emotion and recommendation using a pre-trained model. Procedia Computer Science, 218, 1196–1208.
Riessman, C. (2005). Narrative analysis. In Narrative, memory & everyday life. University of Huddersfield, Huddersfield.
Rimé, B. (2009). Emotion elicits the social sharing of emotion: Theory and empirical review. Emotion Review, 1(1), 60–85.
Rolvsjord, R. (2001). Sophie learns to play her songs of tears: A case study exploring the dialectics between didactic and psychotherapeutic music therapy practices. Nordic Journal of Music Therapy, 10(1), 77–85.
Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39(6), 1161.
Russell, J. A. (2003). Core affect and the psychological construction of emotion. Psychological Review, 110(1), 145.
Ryan, M.-L. (2015). Texts, worlds, stories: Narrative worlds as cognitive and ontological concept. In Narrative theory, literature, and new media (pp. 11–28). Routledge.
Salim, S., Iqbal, Z., & Iqbal, J. (2021). Emotion classification through product consumer reviews. Pakistan Journal of Engineering and Technology, 4(4), 35–40.
Shi, W., & Feng, S. (2018). Research on music emotion classification based on lyrics and audio. 2018 IEEE 3rd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), 1154–1159.
Shukla, S., Khanna, P., & Agrawal, K. K. (2017). Review on sentiment analysis on music. 2017 International Conference on Infocom Technologies and Unmanned Systems (Trends and Future Directions) (ICTUS), 777–780.
Smith, B. H. (2016). What was "close reading"? A century of method in literary studies. The Minnesota Review, 2016(87), 57–75.
Sundararajan, M., Taly, A., & Yan, Q. (2017). Axiomatic attribution for deep networks. International Conference on Machine Learning, 3319–3328.
Talebi, S., Tong, E., Li, A., Yamin, G., Zaharchuk, G., & Mofrad, M. R. (2024). Exploring the performance and explainability of fine-tuned BERT models for neuroradiology protocol assignment. BMC Medical Informatics and Decision Making, 24(1), 40.
Tan, E. S.-H. (1995). Film-induced affect as a witness emotion. Poetics, 23(1–2), 7–32.
Thayer, R. E. (1990). The biopsychology of mood and arousal. Oxford University Press.
Tzanetakis, G., & Cook, P. (2002). Musical genre classification of audio signals. IEEE Transactions on Speech and Audio Processing, 10(5), 293–302.
Ujlambkar, A. M., & Attar, V. Z. (2012). Automatic mood classification model for Indian popular music. 2012 Sixth Asia Modelling Symposium, 7–12.
Ullah, R., Amblee, N., Kim, W., & Lee, H. (2016). From valence to emotions: Exploring the distribution of emotions in online product reviews. Decision Support Systems, 81, 41–53.
van Gulik, R., Vignoli, F., & van de Wetering, H. (2004). Mapping music in the palm of your hand: Explore and discover your collection. Proceedings of the 5th International Conference on Music Information Retrieval.
Wang, J., & Yang, Y. (2019). Deep learning based mood tagging for Chinese song lyrics. arXiv preprint arXiv:1906.02135.
Weninger, F., Eyben, F., Mortillaro, M., & Scherer, K. R. (2013). On the acoustics of emotion in audio: What speech, music, and sound have in common. Frontiers in Psychology, 4, 51547.
Wicentowski, R., & Sydes, M. R. (2012). Emotion detection in suicide notes using maximum entropy classification. Biomedical Informatics Insights, 5, BII–S8972.
Wilson, T., Wiebe, J., & Hoffmann, P. (2005). Recognizing contextual polarity in phrase-level sentiment analysis. Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, 347–354.
Zad, S., & Finlayson, M. (2020). Systematic evaluation of a framework for unsupervised emotion recognition for narrative text. Proceedings of the First Joint Workshop on Narrative Understanding, Storylines, and Events, 26–37.
Zhong, J., Cheng, Y., Yang, S., & Wen, L. (2012). Music sentiment classification integrating audio with lyrics. Journal of Information & Computational Science, 9(1), 35–44.
Description Master's thesis
National Chengchi University (國立政治大學)
Graduate Institute of Linguistics (語言學研究所)
110555005
Source http://thesis.lib.nccu.edu.tw/record/#G0110555005
Type thesis
dc.identifier.uri (URI) https://nccur.lib.nccu.edu.tw/handle/140.119/152959
dc.description.tableofcontents Acknowledgements (謝誌); Abstract (Chinese and English); Table of Contents; List of Figures; List of Tables
1 Introduction: 1.1 Background and Motivation; 1.2 Theoretical Framework; 1.3 Research Gaps; 1.4 Research Objectives and Questions; 1.5 Organization of the Study
2 Literature Review: 2.1 Emotion Theory and Schema (2.1.1 Dimensional Emotion Schema; 2.1.2 Categorical Emotion Schema); 2.2 Machine Learning in Music Emotion Recognition (2.2.1 Audio-Based Music Emotion Recognition; 2.2.2 Lyrics-Based Music Emotion Recognition; 2.2.3 Combination of Audio and Lyrics-Based Music Emotion Recognition); 2.3 Narrative Theory (2.3.1 Overview of Narrative Theory; 2.3.2 Narrative and Emotion; 2.3.3 Open and Close Reading in Narrative Analysis; 2.3.4 Employing Narrative Analysis on Analyzing Song Lyrics)
3 Methodology: 3.1 Data Source; 3.2 Data Pre-processing; 3.3 Data Segmentation; 3.4 Data Annotation (3.4.1 Annotation Details); 3.5 Feature Extraction; 3.6 BERT; 3.7 Experimental Setup (3.7.1 Model Training for Predicting Valence and Arousal; 3.7.2 Model Training for Predicting Intertwined Valence and Arousal; 3.7.3 Model Evaluation; 3.7.4 Statistical Test)
4 Results: 4.1 Model Training Results (4.1.1 Results of Valence Models; 4.1.2 Results of Arousal Models; 4.1.3 Results of Intertwined Valence and Arousal Models); 4.2 Statistical Results (4.2.1 Attribution Method and Statistical Approach; 4.2.2 Results of Valence Statistical Test; 4.2.3 Results of Arousal Statistical Test; 4.2.4 Results of Intertwined Valence and Arousal Statistical Test)
5 Discussion: 5.1 Song-Level Labeling and Paragraph-Level Labeling; 5.2 Narrative Analysis of Chinese Song Lyrics (5.2.1 Thematic and Structural Analysis of Valence Classification; 5.2.2 Thematic and Structural Analysis of Arousal Classification; 5.2.3 Thematic and Structural Analysis of Quadrants Classification); 5.3 Error Analysis (5.3.1 Misclassification of Valence; 5.3.2 Misclassification of Arousal; 5.3.3 Misclassification of Intertwined Valence and Arousal)
6 Conclusions: 6.1 Summary of The Study; 6.2 Contributions and Implications; 6.3 Future Research
References
dc.format.extent 9311010 bytes
dc.format.mimetype application/pdf