Please use this identifier to cite or link to this item: https://ah.lib.nccu.edu.tw/handle/140.119/131904
DC Field | Value | Language
dc.contributor.advisor | 黃瀚萱; 陳宜秀 | zh_TW
dc.contributor.advisor | Huang, Hen-Hsen; Chen, Yi-Hsiu | en_US
dc.contributor.author | 姚璇亨 | zh_TW
dc.contributor.author | Yao, Hsuan-Heng | en_US
dc.creator | 姚璇亨 | zh_TW
dc.creator | Yao, Hsuan-Heng | en_US
dc.date | 2020 | en_US
dc.date.accessioned | 2020-09-02T05:08:46Z | -
dc.date.available | 2020-09-02T05:08:46Z | -
dc.date.issued | 2020-09-02T05:08:46Z | -
dc.identifier | G0107462012 | en_US
dc.identifier.uri | http://nccur.lib.nccu.edu.tw/handle/140.119/131904 | -
dc.description | 碩士 (Master's) | zh_TW
dc.description | 國立政治大學 (National Chengchi University) | zh_TW
dc.description | 數位內容碩士學位學程 (Master's Program in Digital Content) | zh_TW
dc.description | 107462012 | zh_TW
dc.description.abstract | The tasting of food and drink is an essential element of human civilization and of daily life. With advances in machine learning and deep learning and the rapid accumulation of multimedia data, computer models trained on image and audio inputs can now recognize and predict visual and auditory attributes reasonably well; by comparison, research on gustatory and olfactory attributes remains scarce. Most current studies of taste and smell perception rely on the sensory analysis methods of food science, widely applied in the consumer goods industry, in which professionally trained or amateur panelists evaluate the study target and express subjective impressions as comments, ratings, or rankings, from which sensory attribute data are compiled for analysis. Such results, however, are constrained by the panelists' subjective preferences and perceptual abilities, leaving room for improvement in objectivity and generality. Building on expert sensory evaluations and consumer review texts from online alcoholic beverage websites, this study applies sentiment analysis techniques from natural language processing to establish an analytical approach that combines relatively objective reference value with subjective perceptual meaning, and extends it with a test on a public dataset. By integrating online digital content with data mining techniques, the study aims to offer a more general and cost-effective methodology for taste-related research. (Illustrative sketches of this classification setup and of the model-versus-consumer comparison follow below.) | zh_TW
dc.description.tableofcontents | Chapter 1: Introduction; Chapter 2: Literature Review (2.1 Sensory Analysis Research; 2.2 Affective Computing; 2.3 Deep Learning-Based Sentiment Analysis; 2.4 Multi-Class Classification Tasks); Chapter 3: Research Methods (3.1 Research Framework; 3.2 Datasets; 3.3 Multi-Class Classification Model Design; 3.4 Multi-Class Classification Model Evaluation Criteria; 3.5 Sensory Analysis Experiment Design; 3.6 Sensory Analysis Experiment Evaluation; 3.7 Yelp Dataset Experiment Design and Evaluation); Chapter 4: Results (4.1 Multi-Class Classification Model Evaluation; 4.2 Statistical Analysis of Model Predictions and Consumer Ratings; 4.3 Correlation Analysis of Perception Patterns between Model Predictions and Consumer Ratings; 4.4 Yelp Dataset Test Results); Chapter 5: Conclusion; References | zh_TW
dc.format.extent | 2332021 bytes | -
dc.format.mimetype | application/pdf | -
dc.source.uri | http://thesis.lib.nccu.edu.tw/record/#G0107462012 | en_US
dc.subject | 自然語言處理 (natural language processing) | zh_TW
dc.subject | 意見探勘 (opinion mining) | zh_TW
dc.subject | 文本情緒分析 (text sentiment analysis) | zh_TW
dc.subject | 深度學習 (deep learning) | zh_TW
dc.subject | 情感計算 (affective computing) | zh_TW
dc.subject | 多類別分類 (multi-class classification) | zh_TW
dc.subject | 感官分析 (sensory analysis) | zh_TW
dc.subject | 感官品評 (sensory evaluation) | zh_TW
dc.title | 基於感知屬性的品味分析: 以酒飲網站專業與消費者評論為例 | zh_TW
dc.title | Taste Analysis Based on Sensory Characteristics: A Case Study Using Expert and Consumer Reviews from an Alcoholic Beverage Website | en_US
dc.type | thesis | en_US
dc.identifier.doi | 10.6814/NCCU202001553 | en_US
item.fulltext | With Fulltext | -
item.openairecristype | http://purl.org/coar/resource_type/c_46ec | -
item.openairetype | thesis | -
item.grantfulltext | open | -
item.cerifentitytype | Publications | -
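
The abstract above describes training a multi-class text classifier on expert and consumer drink reviews to predict sensory attributes. Below is a minimal sketch of such a pipeline, assuming the simpletransformers library with an xlnet-base-cased checkpoint as one plausible transformer choice; the attribute labels, example texts, and training settings are invented for illustration and are not the thesis's actual data or configuration.

```python
# Hypothetical sketch: multi-class sensory-attribute classification of review text.
# Requires: pip install simpletransformers pandas
import pandas as pd
from simpletransformers.classification import ClassificationModel

# Invented attribute labels: 0=sweet, 1=smoky, 2=fruity, 3=spicy, 4=floral
train_df = pd.DataFrame({
    "text": [
        "Honeyed and round, with a long caramel finish.",
        "Heavy peat smoke over charred oak.",
        "Bright orchard fruit, green apple and pear.",
    ],
    "labels": [0, 1, 2],
})

# XLNet-based classifier; any supported pretrained transformer could be swapped in.
model = ClassificationModel(
    "xlnet", "xlnet-base-cased",
    num_labels=5,
    args={"num_train_epochs": 1, "overwrite_output_dir": True},
    use_cuda=False,  # set True if a GPU is available
)
model.train_model(train_df)

# Predict the dominant sensory attribute of an unseen consumer review.
predictions, raw_outputs = model.predict(
    ["Smoky nose, then dried fruit and a peppery finish."]
)
print(predictions)  # e.g. [1]
```

A real training set would need many labelled reviews per attribute; the three rows above only show the expected data layout.
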
Appears in Collections: 學位論文 (Theses)
Files in This Item:
File | Description | Size | Format
201201.pdf |  | 2.28 MB | Adobe PDF (View/Open)
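
Sections 4.2 and 4.3 of the table of contents compare the computer model's predictions with general consumer ratings, first statistically and then by correlating their perception patterns. The sketch below shows one way such a comparison could look, assuming each product is summarized as a vector of per-attribute intensity scores; the numbers, and the choice of scipy/scikit-learn functions, are illustrative assumptions rather than the thesis's exact procedure.

```python
# Hypothetical sketch: comparing model-predicted attribute intensities
# with averaged consumer ratings for the same product.
import numpy as np
from scipy.stats import spearmanr, mannwhitneyu
from sklearn.metrics.pairwise import cosine_similarity

# Invented intensity scores; each position is one sensory attribute.
model_scores = np.array([4.2, 1.0, 3.5, 2.8, 0.7])     # model predictions
consumer_scores = np.array([3.9, 1.4, 3.1, 3.0, 1.1])  # averaged consumer ratings

# Cosine similarity between the two perception profiles (1.0 means identical direction).
cos = cosine_similarity(model_scores.reshape(1, -1),
                        consumer_scores.reshape(1, -1))[0, 0]

# Spearman rank correlation of the attribute orderings.
rho, p_rho = spearmanr(model_scores, consumer_scores)

# Mann-Whitney U test for a distributional difference between the two score sets.
u_stat, p_u = mannwhitneyu(model_scores, consumer_scores, alternative="two-sided")

print(f"cosine={cos:.3f}, spearman rho={rho:.3f} (p={p_rho:.3f}), "
      f"U={u_stat:.1f} (p={p_u:.3f})")
```
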