Title 基於自監督學習之生成語言模型序列文本知識更新
Sequential Text-based Knowledge Update with Self-Supervised Learning for Generative Language Models
Author 宋浩茹
Sung, Hao-Ru
Contributors 李蔡彥; 黃瀚萱
Li, Tsai-Yen; Huang, Hen-Hsen
宋浩茹
Sung, Hao-Ru
Keywords 自然語言生成
時間知識建模
摘要更新
自監督學習
Natural Language Generation
Temporal Knowledge Modeling
Update Summarization
Self-Supervision
Date 2023
Date uploaded 3-Oct-2023 10:49:40 (UTC+8)
Abstract 本研究提出新的自然語言處理(NLP)任務,以解決多輪、序列式的文本知識更新問題。該研究引入了一種混合學習架構和新穎的自監督訓練策略,旨在使生成語言模型能夠像人類一樣有效地鞏固和更新知識。這種方式對於改善語言模型的學習和理解能力具有重大意義。為了驗證這種策略的有效性,我們還創建了一個新的數據集以進行評估。從實驗結果來看,我們的方法在效能上超越了現有的模型和GPT-3.5-Turbo。本研究所提出的任務和模型架構能夠提升知識組織的自動化程度,使得基於文本知識的大型語言模型(LLM),成為協助人類執行各種任務的重要資源。
This work proposes a new natural language processing (NLP) task to tackle the problem of multi-round, sequential text-based knowledge updating. The study introduces a hybrid learning architecture and a novel self-supervised training strategy that enable generative language models to consolidate and update knowledge much as humans do. A new dataset was also created for evaluation. Experimental results confirm that the proposed approach outperforms existing models as well as GPT-3.5-Turbo. The proposed task and model framework can substantially improve the automation of knowledge organization, making text-based knowledge an increasingly crucial resource for large language models (LLMs) that assist humans with a wide range of tasks.
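To make the task concrete for readers browsing this record, the sketch below shows, in heavily simplified form, what a multi-round, sequential text-based knowledge update loop could look like: an existing entry is revised each time a new report arrives. It is a minimal illustration built on a generic off-the-shelf seq2seq summarizer from Hugging Face transformers; the checkpoint name, input format, and decoding settings are assumptions for demonstration only and do not represent the thesis's hybrid training framework or self-supervised pre-training strategy.

# Minimal illustrative sketch (NOT the thesis's implementation): revise a
# text-based knowledge entry round by round as new articles arrive, using a
# generic off-the-shelf seq2seq summarizer. The checkpoint, input format, and
# decoding settings below are assumptions chosen only for demonstration.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "facebook/bart-large-cnn"  # assumed generic summarization checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def update_entry(current_entry: str, new_article: str) -> str:
    """One update round: condition on the current entry plus the newly
    arrived article and generate a revised entry."""
    source = current_entry + " " + tokenizer.sep_token + " " + new_article
    inputs = tokenizer(source, return_tensors="pt", truncation=True, max_length=1024)
    output_ids = model.generate(**inputs, num_beams=4, max_length=160)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Sequential (multi-round) update: each incoming report triggers a fresh
# revision that has to stay consistent with what earlier rounds established.
entry = "A magnitude 6.2 earthquake struck the region on Monday."
for article in [
    "Rescue teams report 12 injuries; two bridges remain closed.",
    "Officials revise the injury count to 20 and reopen one bridge.",
]:
    entry = update_entry(entry, article)
    print(entry)

In the thesis itself, this single generic model is replaced by the proposed hybrid student-teacher framework with self-supervised pre-training, evaluated on the WCEP and NetKu data listed in the table of contents below.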
References [1] Y. T. Lee, Y. J. Tang, Y. C. Cheng, P. L. Chen, T. Y. Li, and H. H. Huang, "A Multi-grained Dataset for News Event Triggered Knowledge Update." pp. 4158-4162.
[2] S. F. Chen, and J. Goodman, "An empirical study of smoothing techniques for language modeling."
[3] L. R. Rabiner, “A tutorial on hidden Markov models and selected applications in speech recognition,” Proceedings of the IEEE, vol. 77, no. 2, pp. 257-286, 1989.
[4] R. J. Williams, and D. Zipser, “A Learning Algorithm for Continually Running Fully Recurrent Neural Networks,” Neural Computation, vol. 1, no. 2, pp. 270-280, 1989.
[5] S. Hochreiter, and J. Schmidhuber, “Long Short-Term Memory,” Neural Comput., vol. 9, no. 8, pp. 1735–1780, 1997.
[6] D. Bahdanau, K. Cho, and Y. Bengio, “Neural Machine Translation by Jointly Learning to Align and Translate,” ArXiv, vol. 1409, 09/01, 2014.
[7] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” in Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, California, USA, 2017, pp. 6000–6010.
[8] X. Liu, H.-F. Yu, I. S. Dhillon, and C.-J. Hsieh, “Learning to encode position for transformer with continuous dynamical model,” in Proceedings of the 37th International Conference on Machine Learning, 2020, Article 587.
[9] X. Tannier, and V. Moriceau, "Building event threads out of multiple news articles." pp. 958-967.
[10] M. A. Hearst, S. T. Dumais, E. Osuna, J. Platt, and B. Scholkopf, “Support vector machines,” IEEE Intelligent Systems and their Applications, vol. 13, no. 4, pp. 18-28, 1998.
[11] J. Platt, Sequential Minimal Optimization: A Fast Algorithm for Training Support Vector Machines, Technical Report MSR-TR-98-14, Microsoft, 1998.
[12] S. Tarnpradab, F. Jafariakinabad, and K. A. Hua, “Improving Online Forums Summarization via Hierarchical Unified Deep Neural Network,” 2021.
[13] H. T. Dang, and K. Owczarzak, "Overview of the TAC 2008 update summarization task."
[14] J. Aslam, F. Diaz, M. Ekstrand-Abueg, R. McCreadie, V. Pavlu, and T. Sakai, TREC 2014 Temporal Summarization Track Overview, 2015.
[15] J. A. Aslam, M. Ekstrand-Abueg, V. Pavlu, F. Diaz, and T. Sakai, "TREC 2013 Temporal Summarization."
[16] S. Panthaplackel, A. Benton, and M. Dredze, "Updated Headline Generation: Creating Updated Summaries for Evolving News Stories." pp. 6438-6461.
[17] F. Dernoncourt, M. M. Ghassemi, and W. Chang, "A Repository of Corpora for Summarization."
[18] M. Banko, V. O. Mittal, and M. J. Witbrock, “Headline generation based on statistical translation,” in Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, Hong Kong, 2000, pp. 318–325.
[19] B. Dorr, D. Zajic, and R. Schwartz, Hedge Trimmer: A Parse-and-Trim Approach to Headline Generation, 2003.
[20] K. Matsumaru, S. Takase, and N. Okazaki, “Improving Truthfulness of Headline Generation,” 2020.
[21] S. Takase, J. Suzuki, N. Okazaki, T. Hirao, and M. Nagata, "Neural headline generation on abstract meaning representation." pp. 1054-1059.
[22] D. Zajic, B. Dorr, and R. Schwartz, "Automatic Headline Generation for Newspaper Stories."
[23] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov, “RoBERTa: A Robustly Optimized BERT Pretraining Approach,” ArXiv, vol. abs/1907.11692, 2019.
[24] W. Xiao, I. Beltagy, G. Carenini, and A. Cohan, "PRIMERA: Pyramid-based Masked Sentence Pre-training for Multi-document Summarization," Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). pp. 5245-5263.
[25] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding,” ArXiv, vol. abs/1810.04805, 2019.
[26] T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean, “Distributed representations of words and phrases and their compositionality,” in Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, Lake Tahoe, Nevada, 2013, pp. 3111–3119.
[27] A. Radford, and K. Narasimhan, "Improving Language Understanding by Generative Pre-Training."
[28] Z. Lan, M. Chen, S. Goodman, K. Gimpel, P. Sharma, and R. Soricut, “Albert: A lite bert for self-supervised learning of language representations,” arXiv preprint arXiv:1909.11942, 2019.
[29] M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, and L. Zettlemoyer, “Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension,” arXiv preprint arXiv:1910.13461, 2019.
[30] J. Zhang, Y. Zhao, M. Saleh, and P. J. Liu, “PEGASUS: pre-training with extracted gap-sentences for abstractive summarization,” in Proceedings of the 37th International Conference on Machine Learning, 2020, Article 1051.
[31] B. Guo, Y. Gong, Y. Shen, S. Han, H. Huang, N. Duan, and W. Chen, “GENIUS: Sketch-based Language Model Pre-training via Extreme and Selective Masking for Text Generation and Augmentation,” arXiv preprint arXiv:2211.10330, 2022.
[32] R. Campos, V. Mangaravite, A. Pasquali, A. Jorge, C. Nunes, and A. Jatowt, “YAKE! Keyword extraction from single documents using multiple local features,” Information sciences, vol. 509, pp. 257-289, 2020.
[33] M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, and L. Zettlemoyer, “BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension,” arXiv pre-print server, 2019-10-29, 2019.
[34] D. G. Ghalandari, C. Hokamp, N. T. Pham, J. Glover, and G. Ifrim, “A Large-Scale Multi-Document Summarization Dataset from the Wikipedia Current Events Portal,” 2020.
[35] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, A. Rodriguez, A. Joulin, E. Grave, and G. Lample, “LLaMA: Open and Efficient Foundation Language Models,” arXiv pre-print server, 2023-02-27, 2023.
[36] W.-L. Chiang, Z. Li, Z. Lin, Y. Sheng, Z. Wu, H. Zhang, L. Zheng, S. Zhuang, Y. Zhuang, J. E. Gonzalez, I. Stoica, and E. P. Xing, “Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90% ChatGPT Quality,” 2023.
[37] G. Penedo, Q. Malartic, D. Hesslow, R. Cojocaru, A. Cappelli, H. Alobeidli, B. Pannier, E. Almazrouei, and J. Launay, “The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only,” arXiv pre-print server, 2023-06-01, 2023.
[38] J. Wei, M. Bosma, V. Y. Zhao, K. Guu, A. W. Yu, B. Lester, N. Du, A. M. Dai, and Q. V. Le, “Finetuned Language Models Are Zero-Shot Learners,” arXiv pre-print server, 2021-09-03, 2021.
[39] E. J. Hu, Y. Shen, P. Wallis, Z. Allen-Zhu, Y. Li, S. Wang, L. Wang, and W. Chen, “LoRA: Low-Rank Adaptation of Large Language Models,” arXiv pre-print server, 2021-10-16, 2021.
[40] C.-Y. Lin, "ROUGE: A Package for Automatic Evaluation of Summaries."
[41] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, "Bleu: a method for automatic evaluation of machine translation." pp. 311-318.
[42] S. Banerjee, and A. Lavie, "METEOR: An automatic metric for MT evaluation with improved correlation with human judgments." pp. 65-72.
[43] T. Zhang, V. Kishore, F. Wu, K. Q. Weinberger, and Y. Artzi, “Bertscore: Evaluating text generation with bert,” arXiv preprint arXiv:1904.09675, 2019.
[44] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu, “Exploring the limits of transfer learning with a unified text-to-text transformer,” Journal of machine learning research, vol. 21, 2020.
[45] Y. Li, “Deep Reinforcement Learning,” arXiv pre-print server, 2018-10-15, 2018.
Description Master's thesis
National Chengchi University
Department of Computer Science
110753124
Source http://thesis.lib.nccu.edu.tw/record/#G0110753124
Type thesis
URI http://nccur.lib.nccu.edu.tw/handle/140.119/147747
Table of Contents
Acknowledgements
Abstract (Chinese)
Abstract (English)
Table of Contents
List of Figures
List of Tables
Chapter 1: Introduction
1.1 Research Background and Motivation
1.2 Research Objectives
1.3 Expected Contributions
1.4 Thesis Organization
Chapter 2: Literature Review
2.1 Generative Language Models
2.2 Text Update Tasks
2.3 Self-Supervised Learning Tasks
2.4 Summary
Chapter 3: Methodology
3.1 Research Framework and Design
3.2 Data Sources
3.2.1 WCEP Dataset
3.2.2 NetKu Dataset
3.3 Dataset Construction
3.3.1 Data Processing for Model Pre-training
3.3.2 Data Processing for Model Fine-tuning
3.4 Hybrid Training Framework
3.4.1 Student Training Stage
3.4.2 Teacher Training Stage
3.5 Self-Supervised Pre-training Strategy
Chapter 4: Experiments and Results
4.1 Experimental Objectives
4.2 Experimental Subjects
4.3 Experimental Methods
4.4 Evaluation Metrics
4.5 Performance Analysis of Experimental Results
4.5.1 Performance of Single-Strategy Self-Supervised Training
4.5.2 Performance of Combined-Strategy Self-Supervised Training
4.5.3 Zero-Shot Performance of LLMs
4.5.4 Fine-Tuning Performance of LLMs
4.6 Case Analysis of Experimental Results
4.6.1 Favorable Cases
4.6.2 Challenging Cases
Chapter 5: Conclusion and Future Work
5.1 Conclusions
5.2 Limitations and Future Work
5.3 Practical Applications
References
Format application/pdf (3,403,829 bytes)