Citation Information
Title | Numeracy-600K: Learning Numeracy for Detecting Exaggerated Information in Market Comments |
Creator | 黃瀚萱 Huang, Hen-Hsen; Chen, Chung-Chi; Takamura, Hiroya; Chen, Hsin-Hsi
Contributor | 資科系 (Department of Computer Science)
Date | 2019-07 |
Date Issued | 4-Jun-2021 14:43:21 (UTC+8) |
Summary | In this paper, we attempt to answer the question of whether neural network models can learn numeracy, which is the ability to predict the magnitude of a numeral at some specific position in a text description. A large benchmark dataset, called Numeracy-600K, is provided for the novel task. We explore several neural network models including CNN, GRU, BiGRU, CRNN, CNN-capsule, GRU-capsule, and BiGRU-capsule in the experiments. The results show that the BiGRU model gets the best micro-averaged F1 score of 80.16%, and the GRU-capsule model gets the best macro-averaged F1 score of 64.71%. Besides discussing the challenges through comprehensive experiments, we also present an important application scenario, i.e., detecting exaggerated information, for the task. |
Relation | Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, ACL, pp. 6307–6313 |
Type | conference |
DOI | http://dx.doi.org/10.18653/v1/P19-1635 |
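The abstract above describes the task and metrics only at a high level. As a minimal illustrative sketch (not the authors' code from the paper), the Python/PyTorch snippet below shows one way the task framing could look: a bidirectional GRU that predicts a magnitude class for a target numeral in a market comment, evaluated with micro- and macro-averaged F1 via scikit-learn. Every name here (BiGRUNumeracyModel, NUM_MAGNITUDE_CLASSES, the dummy labels, and all hyperparameters) is an assumption made for illustration.

# Illustrative sketch only -- not the implementation released with the paper.
# Assumed task framing: given a tokenized market comment with one numeral
# position of interest, predict the magnitude class of that numeral.
import torch
import torch.nn as nn
from sklearn.metrics import f1_score

NUM_MAGNITUDE_CLASSES = 8   # assumption: number of magnitude buckets

class BiGRUNumeracyModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=128,
                 num_classes=NUM_MAGNITUDE_CLASSES):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Bidirectional GRU encodes the comment in both directions.
        self.bigru = nn.GRU(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        # Classify from the concatenated final hidden states of both directions.
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)       # (batch, seq_len, embed_dim)
        _, hidden = self.bigru(embedded)            # hidden: (2, batch, hidden_dim)
        pooled = torch.cat([hidden[0], hidden[1]], dim=-1)
        return self.classifier(pooled)              # (batch, num_classes)

# Toy usage: random token ids stand in for tokenized market comments.
model = BiGRUNumeracyModel(vocab_size=10_000)
tokens = torch.randint(1, 10_000, (4, 20))          # batch of 4 comments
preds = model(tokens).argmax(dim=-1).tolist()
gold = [0, 3, 3, 7]                                  # dummy magnitude labels
print("micro-F1:", f1_score(gold, preds, average="micro"))
print("macro-F1:", f1_score(gold, preds, average="macro"))

The gap between the reported micro-averaged F1 (80.16%) and macro-averaged F1 (64.71%) is what one would expect when magnitude classes are imbalanced: micro-averaging weighs every numeral instance equally, whereas macro-averaging weighs every magnitude class equally.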
dc.contributor | 資科系 (Department of Computer Science)
dc.creator (Author) | 黃瀚萱
dc.creator (Author) | Huang, Hen-Hsen
dc.creator (Author) | Chen, Chung-Chi
dc.creator (Author) | Takamura, Hiroya
dc.creator (Author) | Chen, Hsin-Hsi
dc.date (Date) | 2019-07
dc.date.accessioned | 4-Jun-2021 14:43:21 (UTC+8)
dc.date.available | 4-Jun-2021 14:43:21 (UTC+8)
dc.date.issued (Upload time) | 4-Jun-2021 14:43:21 (UTC+8)
dc.identifier.uri (URI) | http://nccur.lib.nccu.edu.tw/handle/140.119/135528
dc.description.abstract (Abstract) | In this paper, we attempt to answer the question of whether neural network models can learn numeracy, which is the ability to predict the magnitude of a numeral at some specific position in a text description. A large benchmark dataset, called Numeracy-600K, is provided for the novel task. We explore several neural network models including CNN, GRU, BiGRU, CRNN, CNN-capsule, GRU-capsule, and BiGRU-capsule in the experiments. The results show that the BiGRU model gets the best micro-averaged F1 score of 80.16%, and the GRU-capsule model gets the best macro-averaged F1 score of 64.71%. Besides discussing the challenges through comprehensive experiments, we also present an important application scenario, i.e., detecting exaggerated information, for the task.
dc.format.extent | 470075 bytes
dc.format.mimetype | application/pdf
dc.relation (Relation) | Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, ACL, pp. 6307–6313
dc.title (Title) | Numeracy-600K: Learning Numeracy for Detecting Exaggerated Information in Market Comments
dc.type (Type) | conference
dc.identifier.doi (DOI) | 10.18653/v1/P19-1635
dc.doi.uri (DOI) | http://dx.doi.org/10.18653/v1/P19-1635