Citation Information

Title Learning English–Chinese bilingual word representations from sentence-aligned parallel corpus
Authors 黃瀚萱*
Chen, Hsin-Hsi
Yen, An-Zi
Contributor Department of Computer Science (資科系)
Keywords Cross-lingual applications; Distributed word representation; Word alignment
Date 2019-07
Date uploaded 5-Mar-2020 14:40:57 (UTC+8)
Abstract Representation of words in different languages is fundamental to various cross-lingual applications. In past research, there has been debate over whether word alignment should be used when learning bilingual word representations. This paper presents a comprehensive empirical study on the use of a parallel corpus to learn word representations in the embedding space. Various non-alignment and alignment approaches are explored to formulate the contexts for Skip-gram modeling. Among the approaches without word alignment, concatenating A and B, concatenating B and A, interleaving A with B, shuffling A and B, and using A and B separately are considered, where A and B denote parallel sentences in two languages. Among the approaches with word alignment, three word alignment tools, GIZA++, TsinghuaAligner, and fast_align, are employed to align words in sentences A and B. The effects of the alignment direction, from A to B or from B to A, are also discussed. To deal with unaligned words in the word alignment approaches, two alternatives are explored: using the words aligned with their immediate neighbors, and using the words as in the interleaving approach. We evaluate the performance of the adopted approaches on four tasks: bilingual dictionary induction, cross-lingual information retrieval, cross-lingual analogy reasoning, and cross-lingual word semantic relatedness. These tasks cover the issues of translation, reasoning, and information access. Experimental results show that the word alignment approach with conditional interleaving achieves the best performance on most of the tasks. © 2019 Elsevier Ltd. All rights reserved.
Relation Computer Speech & Language, Vol. 56, pp. 52-72
Type article
DOI https://doi.org/10.1016/j.csl.2019.01.002
URI http://nccur.lib.nccu.edu.tw/handle/140.119/129119
Format 2908432 bytes, application/pdf
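The non-alignment strategies named in the abstract (concatenating, interleaving, and shuffling parallel sentences A and B to build pseudo-sentences before Skip-gram training) can be sketched as follows. This is a minimal illustration of how such pseudo-sentences might be constructed, not the paper's actual implementation; the function names and the toy sentence pair are illustrative.

```python
import random
from itertools import zip_longest

def concatenate(a, b):
    """Concatenating A and B: one pseudo-sentence, A's words followed by B's."""
    return a + b

def interleave(a, b):
    """Interleaving A with B: alternate words from the two parallel sentences."""
    out = []
    for wa, wb in zip_longest(a, b):
        if wa is not None:
            out.append(wa)
        if wb is not None:
            out.append(wb)
    return out

def shuffle(a, b, seed=0):
    """Shuffling A and B: randomly mix the words of both sentences."""
    out = a + b
    random.Random(seed).shuffle(out)
    return out

# Toy English-Chinese parallel sentence pair (illustrative only).
a = ["we", "learn", "word", "vectors"]
b = ["我們", "學習", "詞", "向量"]
print(concatenate(a, b))
print(interleave(a, b))
```

Each strategy yields a different word-context structure for a Skip-gram model: concatenation keeps monolingual contexts mostly intact, interleaving places translation candidates inside each other's context windows, and shuffling mixes contexts randomly.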