Title: Numeracy-600K: Learning Numeracy for Detecting Exaggerated Information in Market Comments
Authors: 黃瀚萱
Huang, Hen-Hsen
Chen, Chung-Chi
Takamura, Hiroya
Chen, Hsin-Hsi
Contributors: Department of Computer Science
Date: 2019-07
Issue Date: 2021-06-04 14:43:21 (UTC+8)
Abstract: In this paper, we attempt to answer the question of whether neural network models can learn numeracy, which is the ability to predict the magnitude of a numeral at some specific position in a text description. A large benchmark dataset, called Numeracy-600K, is provided for the novel task. We explore several neural network models including CNN, GRU, BiGRU, CRNN, CNN-capsule, GRU-capsule, and BiGRU-capsule in the experiments. The results show that the BiGRU model achieves the best micro-averaged F1 score of 80.16%, and the GRU-capsule model achieves the best macro-averaged F1 score of 64.71%. Besides discussing the challenges through comprehensive experiments, we also present an important application scenario, i.e., detecting exaggerated information, for the task.
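The task described in the abstract — predicting the magnitude of a numeral at a given position in a market comment — can be sketched as a preprocessing step that masks the numeral and derives a magnitude label. This is a minimal illustration only, not the paper's exact labeling scheme; the function names, the `<NUM>` mask token, and the class boundaries (number of integer digits, with a separate class for values below 1) are assumptions:

```python
def magnitude_class(numeral: str) -> int:
    """Map a numeral string to an order-of-magnitude class.

    Assumed scheme: class 0 for values below 1 (pure decimals),
    otherwise the number of digits in the integer part.
    """
    value = abs(float(numeral.replace(",", "")))
    if value < 1:
        return 0
    return len(str(int(value)))


def mask_numeral(comment: str, numeral: str) -> str:
    """Replace the first occurrence of the target numeral with a mask
    token, producing the model's input; the magnitude class of the
    masked numeral serves as the prediction target."""
    return comment.replace(numeral, "<NUM>", 1)


# Example: a model would see the masked comment and predict class 4
# (a four-digit magnitude) for the hidden numeral.
comment = "The FTSE 100 closed up 7,000 points"
print(mask_numeral(comment, "7,000"))   # The FTSE 100 closed up <NUM> points
print(magnitude_class("7,000"))         # 4
print(magnitude_class("0.53"))          # 0
```

A classifier such as the BiGRU mentioned in the abstract would then be trained to map the masked text to the magnitude class; detecting exaggeration amounts to flagging comments whose actual numeral disagrees with the predicted magnitude.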
Relation: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, ACL, pp.6307–6313
Data Type: conference
Appears in Collections: [Department of Computer Science] Conference Papers

Files in This Item:

File: 286.pdf
Size: 459Kb
Format: Adobe PDF

All items in the Academic Hub (學術集成) are protected by copyright, with all rights reserved.
