Academic Output - Theses

Title 多次 LDA 主題模型及文本探勘應用於多種中文語料
Multiple LDA of Topic Modeling and Text Mining Used in Chinese Corpora
Author 陳瑀芋 (Chen, Yu-Yu)
Contributors 劉昭麟 (Liu, Chao-Lin), advisor; 陳瑀芋 (Chen, Yu-Yu), author
Keywords 主題模型 (Topic Modeling); 文本探勘 (Text Mining); LDA
Date 2021
Uploaded 1-Nov-2021 12:00:09 (UTC+8)
Abstract The volume of textual data keeps growing, and text mining is becoming correspondingly important. Social science scholars need to extract information from large collections of documents and to conduct research on them, a need that has given rise to the field of Digital Humanities. The main purpose of this research is to use topic modeling to identify the main topics a corpus is composed of, to apply text mining to the documents under topics of interest, and to provide visualized results, so that humanities scholars can carry out research in related fields efficiently.
For topic modeling, we propose a multiple-LDA method, and we try and compare different topic models on different corpora; the results show that multiple LDA effectively improves the quality of the topic models.
This research sets out to assist social science scholars. By comparing different topic analysis models, users can choose the model that performs better and quickly learn the main topics in a corpus; they can then run further analyses on the documents of interest, reducing the time scholars must spend reading large numbers of texts.
We use the Renmin University of China journal database and Wikipedia as the Chinese corpora for our experiments and tests, and we provide the analysis results to scholars as reference information and supporting evidence for their interpretations.
References
[1] 金觀濤、邱偉雲、梁穎誼、陳柏聿、沈錳坤、及劉青峰。觀念群變化的數位人文研究-以《新青年》為例,2014 第五屆數位典藏與數位人文國際研討會,臺灣,2014。
[2] 金觀濤、邱偉雲及劉昭麟。「共現」詞頻分析及其運用——以「華人」觀念起源為例。數位人文要義:尋找類型與軌跡,141-170,國立台灣大學出版中心。
[3] 劉昭麟、金觀濤、劉青峰、邱偉雲、及姚育松。自然語言處理技術於中文史學文獻分析之初步應用,2011 第三屆數位典藏與數位人文國際研討會論文集,151-168,臺灣,2011。
[4] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3(Jan):993–1022.
[5] Mathieu Bastian, Sebastien Heymann, and Mathieu Jacomy. 2009. Gephi: An open source software for exploring and manipulating networks. In International AAAI Conference on Weblogs and Social Media.
[6] Di Jiang, Yuanfeng Song, Rongzhong Lian, Siqi Bao, Jinhua Peng, Huang He, and Hua Wu. 2018. Familia: A Configurable Topic Modeling Framework for Industrial Text Engineering. arXiv:1808.03733.
[7] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers).
[8] Leland McInnes, John Healy, and James Melville. 2020. UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction. arXiv:1802.03426.
[9] Leland McInnes, John Healy, and Steve Astels. 2017. hdbscan: Hierarchical density based clustering. The Journal of Open Source Software, 2. doi:10.21105/joss.00205.
[10] Maarten Grootendorst and Nils Reimers. 2021. MaartenGr/BERTopic: v0.9.1. Zenodo. https://doi.org/10.5281/zenodo.5353198
[11] David Newman, Jey Han Lau, Karl Grieser, and Timothy Baldwin. 2010. Automatic Evaluation of Topic Coherence. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the ACL, 100–108.
[12] Michael Röder, Andreas Both, and Alexander Hinneburg. 2015. Exploring the Space of Topic Coherence Measures. In Proceedings of the 8th ACM International Conference on Web Search and Data Mining (WSDM 2015), 399–408. doi:10.1145/2684822.2685324.
[13] Carson Sievert and Kenneth Shirley. 2014. LDAvis: A method for visualizing and interpreting topics. In Proceedings of the Workshop on Interactive Language Learning, Visualization, and Interfaces.
[14] Replicated Journal Materials, Renmin University of China (中國人民大學複印報刊資料). https://ipub.exuezhe.com/index.html
[15] Wikipedia API. https://www.mediawiki.org/wiki/API:Main_page
[16] People's Daily (人民日報). http://www.people.com.cn/
[17] Jieba Chinese text segmentation (結巴中文分詞). https://github.com/fxsjy/jieba
[18] OpenCC. https://github.com/BYVoid/OpenCC
[19] Sogou dictionaries (搜狗). https://pinyin.sogou.com/dict/
[20] Familia. https://github.com/baidu/Familia
[21] Flair. https://github.com/flairNLP/flair
[22] Hugging Face. https://huggingface.co/
[23] cnsenti, a Chinese sentiment analysis library (中文情感分析庫). https://github.com/thunderhit/cnsenti
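Reference [23], cnsenti, is presumably the library behind the sentiment analysis of Sections 3.5.3 and 4.9.4 (see the table of contents below). A minimal usage sketch, assuming cnsenti's documented dictionary-based counting interface; the example sentence is invented.

    # Assumes cnsenti [23]: Sentiment().sentiment_count() tallies positive
    # and negative words against built-in Chinese sentiment lexicons.
    from cnsenti import Sentiment

    senti = Sentiment()
    text = "這部電影雖然節奏稍慢,但整體非常精彩。"  # invented example
    print(senti.sentiment_count(text))
    # expected shape: {'words': ..., 'sentences': ..., 'pos': ..., 'neg': ...}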
Description Master's thesis, National Chengchi University, Department of Computer Science, 108753144
Source http://thesis.lib.nccu.edu.tw/record/#G0108753144
Type thesis
URI http://nccur.lib.nccu.edu.tw/handle/140.119/137675
Table of Contents
1 Introduction
1.1 Research Motivation
1.2 Research Objectives
1.3 Main Contributions
1.4 Thesis Organization
2 Literature Review
2.1 Research on Topic Modeling
2.2 Research on Topic Coherence
2.3 Research on Text Analysis
3 Research Methods
3.1 Experimental Framework
3.2 Experimental Data
3.2.1 Renmin University of China Journal Database
3.2.2 Wikipedia Data
3.3 Data Preprocessing
3.3.1 Data Extraction
3.3.2 Word Segmentation
3.4 Topic Modeling
3.4.1 Single LDA
3.4.2 Multiple LDA
3.4.3 Pre-trained News Topic Model (Familia)
3.4.4 Topic Models Trained with BERT
3.5 Text Analysis
3.5.1 TF-IDF Analysis
3.5.2 Co-occurrence Word Frequency Analysis
3.5.3 Sentiment Analysis
4 Experimental Results
4.1 Evaluation Methods
4.1.1 Single LDA
4.1.2 Multiple LDA
4.2 Topic Model Parameter Experiments
4.2.1 Experimental Design
4.2.2 Analysis of Results
4.3 Topic Models for the Renmin University Journal Database
4.3.1 Data Overview
4.3.2 Single LDA Model
4.3.3 Multiple LDA Model
4.3.4 Pre-trained News Topic Model (Familia)
4.3.5 Topic Models Trained with BERT
4.4 Topic Models for Wikipedia Dataset 1 - Three Categories
4.4.1 Data Overview
4.4.2 Single LDA Model
4.4.3 Multiple LDA Model
4.4.4 Pre-trained News Topic Model (Familia)
4.4.5 Topic Models Trained with BERT
4.5 Topic Models for Wikipedia Dataset 1 - Nine Categories
4.5.1 Data Overview
4.5.2 Single LDA Model
4.5.3 Multiple LDA Model
4.5.4 Pre-trained News Topic Model (Familia)
4.5.5 Topic Models Trained with BERT
4.6 Topic Models for Wikipedia Dataset 2 - Three Categories
4.6.1 Data Overview
4.6.2 Single LDA Model
4.6.3 Multiple LDA Model
4.6.4 Pre-trained News Topic Model (Familia)
4.6.5 Topic Models Trained with BERT
4.7 Topic Models for Wikipedia Dataset 2 - Nine Categories
4.7.1 Data Overview
4.7.2 Single LDA Model
4.7.3 Multiple LDA Model
4.7.4 Pre-trained News Topic Model (Familia)
4.7.5 Topic Models Trained with BERT
4.8 Overall Comparison and Discussion of Topic Models
4.9 Text Analysis Results and Discussion
4.9.1 Experimental Data
4.9.2 TF-IDF Analysis
4.9.3 Co-occurrence Word Frequency Analysis
4.9.4 Sentiment Analysis
5 Conclusion
6 Future Work
References
Appendix A Oral Defense Records
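The "Topic Models Trained with BERT" entries above, together with references [8] UMAP, [9] HDBSCAN, and [10] BERTopic, suggest the BERT-based topic model is BERTopic, which chains sentence embeddings, UMAP dimensionality reduction, and HDBSCAN clustering. A minimal sketch, assuming BERTopic's multilingual defaults and a jieba-based vectorizer for the Chinese topic words; the thesis's actual configuration is not recorded here.

    # Sketch assuming BERTopic [10], which uses UMAP [8] and HDBSCAN [9]
    # internally; the multilingual model and jieba tokenizer are assumptions.
    import jieba
    from sklearn.feature_extraction.text import CountVectorizer
    from bertopic import BERTopic

    docs = ["文本一……", "文本二……"]  # placeholder; use hundreds of
                                      # documents, clustering needs volume

    # Extract topic words with a Chinese-aware tokenizer.
    vectorizer = CountVectorizer(tokenizer=jieba.lcut)

    model = BERTopic(language="multilingual", vectorizer_model=vectorizer)
    topics, probs = model.fit_transform(docs)
    print(model.get_topic_info())  # one row per discovered topic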
Format 7200671 bytes, application/pdf
DOI 10.6814/NCCU202101663