Title: 劇透預警:以敘事結構自動偵測劇透
Spoiler Alert: Automatically Detecting Spoilers as Part of Narrative Structure
Author: Yu, Ying-Pei (余盈蓓)
Advisor: Chang, Yu-Yun (張瑜芸)
Keywords: Spoiler Detection; Narrative Theory; Narrative Extraction; Deep Learning; Online Movie Reviews
Date: 2025
Uploaded: 3-Mar-2025 15:25:57 (UTC+8)

Abstract (translated from the Chinese):
Grounded in narrative theory, this study proposes a method for improving Chinese spoiler detection by combining deep learning models with narrative linguistic features. Spoilers, a drawback that accompanies the convenience of social media and online forums, have become a matter of public concern. To keep users from developing negative feelings toward a platform after encountering unexpected movie content, major platforms offer spoiler warnings: users can tag a review as containing spoilers to alert other users. However, because tagging is voluntary and the definition of a spoiler varies from person to person, much room for improvement remains. This study therefore proposes a BERT model, grounded in narrative theory, that automatically detects spoilers in reviews. The model fine-tunes a pre-trained BERT model with the narrative-specific linguistic features identified in narrative theory, with the aim of improving its performance on Chinese spoiler detection. The results show that BERT (F-score: 0.74; correctly predicted spoilers: 238) became more sensitive to spoilers after being trained with narrative features (F-score: 0.75; correctly predicted spoilers: 256). An analysis of BERT's self-attention mechanism reveals that verb categories, noun categories, core dependency relations, post-posed temporal expressions, aspect markers, and pronoun use have the greatest influence on spoiler detection. In addition, an analysis of misclassified cases shows that the degree of textual subjectivity affects spoiler classification, pointing to a feasible direction for future research. Overall, by combining narrative theory from a linguistic perspective with deep learning techniques, this study provides a solution that improves the effectiveness of automatic Chinese spoiler detection.
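The evaluation figures quoted above (F-score 0.74 vs. 0.75) use the standard F-score, the harmonic mean of precision and recall. A minimal sketch of that computation follows; the confusion counts passed in are made up for illustration, not the thesis's actual values.

```python
def f_score(true_positives: int, false_positives: int, false_negatives: int) -> float:
    """F1: harmonic mean of precision and recall."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts chosen only to show the shape of the computation:
print(round(f_score(256, 80, 90), 2))  # 0.75
```

Note that the F-score alone does not determine the underlying counts; different precision/recall trade-offs can yield the same value.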
Abstract (English):
This study proposes an approach that combines narrative theory and deep learning models for spoiler detection in Chinese online reviews. Spoilers are an inconvenience brought about by social media and online forums: when people share their thoughts on fictive works such as books and movies, they may accidentally reveal important plot points that other users do not wish to know in advance. We extracted the characteristics of narrative discourse as features through thematic analysis and structural analysis, and trained a BERT model with these narrative features encoded. The results show that BERT trained with narrative features (F-score: 0.75; correct spans: 256) outperformed BERT trained without them (F-score: 0.74; correct spans: 238). An analysis of the attention mechanism in our model found verbs, nouns, core argument relations, time-related linguistic devices, and pronouns to be especially helpful for spoiler detection. Moreover, error analysis suggested that subjectivity may be a critical element in spoiler detection. In conclusion, this study shows that spoiler detection can be treated as a narrative extraction task and improved through narrative analysis.

Description: Master's thesis
National Chengchi University
Graduate Institute of Linguistics
110555009
Source: http://thesis.lib.nccu.edu.tw/record/#G0110555009
URI: https://nccur.lib.nccu.edu.tw/handle/140.119/156080
Type: thesis
Format: application/pdf, 8199715 bytes

Table of Contents:
Acknowledgements
摘要 (Chinese abstract)
Abstract
List of Figures
List of Tables
1. Introduction
   1.1 Background
   1.2 Theoretical Framework: Narrative Theory
   1.3 Research Gap and Motivation
   1.4 Research Questions and Objectives
   1.5 Significance of the Study
   1.6 Organization of the Study
2. Literature Review
   2.1 Narrative
   2.2 Narrative Analysis
       2.2.1 Thematic Analysis
       2.2.2 Structural Analysis
   2.3 Spoiler Detection
       2.3.1 Keyword-based Spoiler Detection
       2.3.2 Traditional Machine Learning-based Spoiler Detection
       2.3.3 Deep Learning-based Spoiler Detection
       2.3.4 Levels of Spoilers
3. Methodology
   3.1 Dataset
   3.2 Data Pre-processing
   3.3 Annotation
   3.4 Feature Extraction
       3.4.1 Thematic Features
       3.4.2 Structural Features
   3.5 Experiment Setup
       3.5.1 Model Selection
       3.5.2 Model Construction
       3.5.3 Model Training
   3.6 Evaluation - Model Performance
       3.6.1 Accuracy
       3.6.2 Precision
       3.6.3 Recall
       3.6.4 F-score
   3.7 Evaluation - Feature Importance
4. Results
   4.1 Model Performance
   4.2 Feature Importance
5. Discussion
   5.1 Paragraph-level, Sentence-level, and Span-level Models
   5.2 Thematic Narrative Analysis on Chinese Spoiler Detection
       5.2.1 Verb Categories
       5.2.2 Noun Categories
       5.2.3 Dependency Relations
   5.3 Structural Narrative Analysis on Chinese Spoiler Detection
       5.3.1 Time-related Words
       5.3.2 Pronouns and Perspectives
   5.4 Error Analysis
       5.4.1 Narrative of Personal Experience
       5.4.2 Opinions without Stances
       5.4.3 Information Loss in Long Spans
6. Conclusions
   6.1 Summary of the Study
   6.2 Contribution and Implication
   6.3 Future Research
References
Appendix A: Mann-Whitney U Test
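As a rough, hypothetical illustration of the feature-encoding idea the abstract describes (pre-trained BERT representations combined with narrative linguistic features), the sketch below appends a one-hot narrative-tag vector to each token's embedding vector. The tag set, function name, and dimensions are invented for illustration only; the thesis's actual model construction is specified in its Section 3.5.2 (Model Construction).

```python
# Hypothetical tag inventory, loosely inspired by the feature categories the
# abstract reports as influential (verbs, nouns, temporal expressions,
# aspect markers, pronouns). Not the thesis's actual feature set.
NARRATIVE_TAGS = {"none": 0, "verb": 1, "noun": 2, "temporal": 3, "aspect": 4, "pronoun": 5}

def encode_with_narrative_features(token_embeddings, tags):
    """Append a one-hot narrative-feature vector to each token's embedding.

    token_embeddings: one vector per token (e.g. BERT hidden states);
    tags: one narrative tag per token, aligned with the embeddings.
    """
    enriched = []
    for vector, tag in zip(token_embeddings, tags):
        one_hot = [0.0] * len(NARRATIVE_TAGS)
        one_hot[NARRATIVE_TAGS[tag]] = 1.0
        enriched.append(list(vector) + one_hot)
    return enriched

# Four tokens with toy 3-dimensional "embeddings" (BERT-base would use 768 dims)
emb = [[0.1, 0.2, 0.3]] * 4
tags = ["pronoun", "verb", "none", "aspect"]
enriched = encode_with_narrative_features(emb, tags)
print(len(enriched), len(enriched[0]))  # 4 9
```

The enriched vectors could then feed a classification head during fine-tuning; concatenation is only one of several plausible ways to inject such features.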
