Please use this identifier to cite or link to this item:
https://ah.lib.nccu.edu.tw/handle/140.119/133561
DC Field | Value | Language |
---|---|---|
dc.contributor | Graduate Institute of Linguistics | |
dc.creator | 張瑜芸 | |
dc.creator | Chang, Yu-Yun | |
dc.creator | Magistry, Pierre | |
dc.creator | Hsieh, Shu-Kai | |
dc.date | 2016-12 | |
dc.date.accessioned | 2021-01-18T05:21:25Z | - |
dc.date.available | 2021-01-18T05:21:25Z | - |
dc.date.issued | 2021-01-18T05:21:25Z | - |
dc.identifier.uri | http://nccur.lib.nccu.edu.tw/handle/140.119/133561 | - |
dc.description.abstract | In this paper, we present a system for sentiment detection in Chinese micro-blog data. Our system, perhaps surprisingly, benefits from the lack of word boundaries in the Chinese writing system and shifts the focus directly to larger, more relevant chunks. We use an unsupervised Chinese word segmentation system and a binomial test to extract specific, endogenous lexicon chunks from the training corpus. We combine these lexicon chunks with other external resources to train a maximum entropy model for document classification. With this method, we obtain an average F1 score of 87.2, which outperforms the state-of-the-art approach on the released data from the second SocialNLP shared task. | |
dc.format.extent | 1137118 bytes | - |
dc.format.mimetype | application/pdf | - |
dc.relation | Lingua Sinica, Vol.2, No.1, pp.1-10 | |
dc.subject | Sentiment analysis;Emotion lexicon;Unsupervised learning | |
dc.title | Sentiment detection in micro-blogs using unsupervised chunk extraction | |
dc.type | article | |
dc.identifier.doi | 10.1186/s40655-015-0010-8 | |
dc.doi.uri | https://doi.org/10.1186/s40655-015-0010-8 | |
item.openairetype | article | - |
item.fulltext | With Fulltext | - |
item.grantfulltext | restricted | - |
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | - |
item.cerifentitytype | Publications | - |
Appears in Collections: | Journal Articles (期刊論文) |
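The abstract describes scoring candidate chunks with a binomial test to find those over-represented in the training corpus. As a minimal sketch of that general idea (not the authors' actual implementation — function names, the smoothing constant, and the significance threshold are all illustrative assumptions):

```python
# Sketch of binomial-test chunk selection: keep chunks whose frequency in the
# sentiment-labelled foreground corpus is significantly higher than their
# background rate. Names and thresholds are illustrative, not from the paper.
from math import comb

def binom_sf(k, n, p):
    """Exact upper tail P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * (p ** i) * ((1 - p) ** (n - i))
               for i in range(k, n + 1))

def specific_chunks(fg_counts, bg_counts, alpha=0.01):
    """Return chunks significantly over-represented in the foreground corpus."""
    n_fg = sum(fg_counts.values())
    n_bg = sum(bg_counts.values())
    kept = []
    for chunk, k in fg_counts.items():
        # Background rate, lightly smoothed for chunks unseen in the background.
        p = bg_counts.get(chunk, 0.5) / n_bg
        if binom_sf(k, n_fg, p) < alpha:
            kept.append(chunk)
    return kept

# A sentiment-bearing chunk stands out against the background corpus,
# while a frequent function word does not.
fg = {"超好": 30, "的": 10}
bg = {"超好": 1, "的": 100}
print(specific_chunks(fg, bg))  # → ['超好']
```

The selected chunks would then serve as features for the document classifier (a maximum entropy model in the paper); any multinomial logistic regression implementation could play that role in a reimplementation.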