Please use this identifier to cite or link to this item: https://ah.lib.nccu.edu.tw/handle/140.119/98245
Title: Using parallel corpora to automatically generate training data for Chinese segmenters in NTCIR PatentMT task
Authors: Wang, Jui-Ping; Liu, Chao-Lin (劉昭麟)
Contributors: Department of Computer Science
Keywords: Chinese-English Patent Machine Translation, Chinese Near Synonyms, Chinese Segmentation, Machine Learning
Date: Jun-2013
Upload time: 22-Jun-2016
Abstract: Chinese texts do not contain spaces as word separators, unlike English and many other alphabetic languages. To use Moses to train translation models, we must segment Chinese texts into sequences of Chinese words. An increasing number of software tools for Chinese segmentation have become available on the Internet in recent years. However, some of these tools were trained on general texts, so they might not handle domain-specific terms in patent documents very well. Some machine-learning-based tools require segmented Chinese texts as input for training segmentation models. In both cases, providing segmented Chinese texts to refine a pre-trained model or to create a new segmentation model is an important basis for successful Chinese-English machine translation systems. Ideally, high-quality segmented texts should be created and verified by domain experts, but doing so would be quite costly. We explored an approach to algorithmically generate segmented texts from parallel texts and lexical resources. With the new approach, our scores in NTCIR-10 PatentMT indeed improved over our scores in NTCIR-9 PatentMT.
Relation: Proceedings of NTCIR-10 (NTCIR 10), 368–372. Tokyo, Japan, 18-21 June 2013
Data type: conference
Appears in Collections: Conference papers

Files in This Item:
File: 368-372.pdf | Size: 295.58 kB | Format: Adobe PDF



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.