Title 應用機器學習技術於中文裁判書之要旨抽取
Application of Machine Learning for Extraction of Gist of Chinese Judgments
Author 鄭禔雍 (Zheng, Ti-Yong)
Advisor 劉昭麟 (Liu, Chao-Lin)
Keywords Gist extraction from court judgments
Legal technology applications
Deep learning
Machine learning
Automatic text summarization
Date 2021
Uploaded 2-Mar-2021 14:32:46 (UTC+8)
Abstract In legal terminology, a court decision (裁判) is governed by Article 220 of the Code of Criminal Procedure ("Except where this Code requires that a decision be made by judgment, it shall be made by ruling"). Decisions are thus divided by form into rulings (裁定) and judgments (判決), and the written rulings and written judgments that record them are collectively called judgment documents (裁判書).
Judgment documents are important reference material for legal professionals and litigants dealing with legal issues, because they contain the courts' opinions on specific legal questions. However, they also contain a large amount of information that does not carry over to other types of cases, so reading them closely takes considerable time. A reader who can consult a professionally prepared gist of the judgment (裁判要旨) can quickly grasp a judgment's summary and key points, but because preparing such gists also takes substantial time and effort, most judgment documents still lack a manually prepared gist.
Court-prepared gists are mostly excerpts of sentences from the original judgment, so extractive automatic text summarization techniques from machine learning should be applicable. A tool built on such techniques that assists legal professionals in excerpting gists should further improve the efficiency of gist production.
This study treats extractive automatic text summarization as a binary classification task and builds classifiers with deep neural networks. We ran experiments with different context lengths, different embedding models, different added features, and different deep neural network architectures, and found that models using BiLSTM and BiGRU as the deep neural network component performed best; a bagging-based voting mechanism then further improved classification performance.
Because gist sentences are far scarcer in judgment documents than non-gist sentences, the class distribution is highly imbalanced. Even so, the proposed model reaches an F1 score of 0.547 on the District Court judgment data set, 0.492 on the High Court judgment data set, and 0.576 on the Supreme Court judgment data set, confirming that the classifier has indeed learned to extract the gist of a judgment.
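The record does not include the thesis code, but the approach the abstract describes, scoring each sentence of a judgment together with a window of neighboring sentences using a BiLSTM-based binary classifier, can be sketched as follows. This is a minimal illustration, not the author's implementation: the class name, embedding dimension, window size, and pos_weight value are all assumptions.

```python
import torch
import torch.nn as nn

class GistSentenceClassifier(nn.Module):
    """Binary classifier: does this sentence belong to the gist?

    Sketch only: the input is a window of sentence embeddings centered
    on the target sentence, so the model can use surrounding context,
    echoing the thesis's context-length experiments.
    """

    def __init__(self, emb_dim=768, hidden_dim=128):
        super().__init__()
        # A bidirectional LSTM reads the context window in both directions.
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # 2 * hidden_dim: forward and backward states are concatenated.
        self.classifier = nn.Linear(2 * hidden_dim, 1)

    def forward(self, window):                   # (batch, 2k+1, emb_dim)
        states, _ = self.bilstm(window)          # (batch, 2k+1, 2*hidden_dim)
        center = states[:, states.size(1) // 2]  # state at the target sentence
        return self.classifier(center).squeeze(-1)  # one logit per sentence

model = GistSentenceClassifier()
# pos_weight (illustrative value) counteracts the heavy imbalance between
# gist and non-gist sentences that the abstract reports.
loss_fn = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(10.0))

window = torch.randn(4, 5, 768)          # 4 sentences, 2 context sentences per side
labels = torch.tensor([0., 1., 0., 0.])  # 1 = gist sentence
loss_fn(model(window), labels).backward()
```

A BiGRU variant, which the abstract reports as performing comparably, would only swap `nn.LSTM` for `nn.GRU`.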
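The bagging voting mechanism mentioned in the abstract can likewise be sketched: several classifiers, each assumed to be trained on a different bootstrap sample, vote on every sentence, and F1 on the minority (gist) class is the reported metric. The function and variable names below are hypothetical.

```python
import torch
from sklearn.metrics import f1_score

def bagged_predict(models, window, threshold=0.5):
    """Majority vote over independently trained classifiers (bagging sketch)."""
    with torch.no_grad():
        votes = torch.stack([(torch.sigmoid(m(window)) > threshold).long()
                             for m in models])   # (n_models, batch)
    # A sentence counts as gist if more than half of the models say so.
    return (votes.float().mean(dim=0) > 0.5).long()

# F1 on the positive class is the appropriate metric here: with so few gist
# sentences, plain accuracy would be dominated by the non-gist majority.
# y_pred = bagged_predict(models, window)
# print(f1_score(y_true, y_pred.numpy()))
```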
References [1] CKIP Chinese Word Segmentation System (中央研究院詞庫小組), from: https://ckipsvr.iis.sinica.edu.tw
[2] 何君豪. 2006. 階層式分群法在民事裁判要旨分群上之應用 (Applying hierarchical clustering to the clustering of gists of civil judgments) [in Chinese]. Master's thesis. National Chengchi University, Taipei, Taiwan.
[3] 陳冠群. 2018. 中文裁判書之要旨擷取:以最高法院裁判書為例 (Gist extraction from Chinese court judgments: the case of Supreme Court judgments) [in Chinese]. Master's thesis. National Chengchi University, Taipei, Taiwan.
[4] 黃詩淳 and 邵軒磊. 2017. 運用機器學習預測法院裁判——法資訊學之實踐 (Predicting court decisions with machine learning: legal informatics in practice) [in Chinese]. Taiwan Law Review, No. 270. 86-96. DOI: 10.2966/102559312017110270006
[5] 廖鼎銘. 2004. 觸犯多款法條之賭博與竊盜案件的法院文書的分類與分析 (Classification and analysis of court documents in gambling and larceny cases involving multiple statutes) [in Chinese]. Master's thesis. National Chengchi University, Taipei, Taiwan.
[6] Abigail See, Peter J. Liu and Christopher D. Manning. 2017. Get to the Point: Summarization with Pointer-Generator Networks. arXiv preprint arXiv:1704.04368.
[7] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners. OpenAI Blog, 1(8): 9.
[8] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. 2017. Attention Is All You Need. In Advances in Neural Information Processing Systems. 5998-6008.
[9] ConvertZ, from: http://alf-li.pcdiscuss.com/c_convertz.html
[10] Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard and David McClosky. 2014. The Stanford CoreNLP Natural Language Processing Toolkit. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations. 55-60.
[11] Daniel Jurafsky and James H. Martin. 2009. Speech and Language Processing (2nd Edition). Upper Saddle River, NJ, USA: Prentice-Hall.
[12] Sepp Hochreiter. 1998. The Vanishing Gradient Problem During Learning Recurrent Neural Nets and Problem Solutions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 6(2): 107-116.
[13] Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:1810.04805.
[14] Jiatao Gu, Zhengdong Lu, Hang Li and Victor O.K. Li. 2016. Incorporating Copying Mechanism in Sequence-to-Sequence Learning. arXiv preprint arXiv:1603.06393.
[15] Jieba, from: https://github.com/fxsjy/jieba
[16] Jeffrey L. Elman. 1990. Finding structure in time. In Cognitive science, 14(2): 179-211.
[17] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk and Yoshua Bengio. 2014. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. arXiv preprint arXiv:1406.1078.
[18] Yann LeCun, Léon Bottou, Yoshua Bengio and Patrick Haffner. 1998. Gradient-Based Learning Applied to Document Recognition. Proceedings of the IEEE, 86(11): 2278-2324.
[19] Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of International conference on machine learning. 1188-1196.
[20] Richard Maclin and David Opitz. 1997. An empirical evaluation of bagging and boosting. In Proceedings of AAAI. 546-551.
[21] Scikit-Learn, from: https://scikit-learn.org/stable/
[22] Sepp Hochreiter and Juergen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation, 9(8): 1735-1780. DOI: http://dx.doi.org/10.1162/neco.1997.9.8.1735
[23] Simon Tong and Daphne Koller. 2001. Support Vector Machine Active Learning with Applications to Text Classification. Journal of Machine Learning Research, 2: 45-66.
[24] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest and Alexander M. Rush. 2019. HuggingFace's Transformers: State-of-the-art Natural Language Processing. arXiv preprint arXiv:1910.03771.
[25] Tomas Mikolov, Kai Chen, Greg Corrado and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
[26] Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882.
[27] Yu-Lun Hsieh, Shih-Hung Liu, Kuan-Yu Chen, Hsin-Min Wang, Wen-Lian Hsu and Berlin Chen. 2016. 運用序列到序列生成架構於重寫式自動摘要 (Exploiting Sequence-to-Sequence Generation Framework for Automatic Abstractive Summarization)[In Chinese]. In Proceedings of the 28th Conference on Computational Linguistics and Speech Processing (ROCLING 2016). 115-128.
[28] Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun and Qun Liu. 2019. ERNIE: Enhanced Language Representation with Informative Entities. arXiv preprint arXiv:1905.07129.
[29] Zhen-Zhong Lan, Ming-Da Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma and Radu Soricut. 2019. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:1909.11942.
[30] Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R. Salakhutdinov and Quoc V. Le. 2019. XLNet: Generalized Autoregressive Pretraining for Language Understanding. In Advances in neural information processing systems. 5753-5763.
[31] Zhong-guo Li and Mao-song Sun. 2009. Punctuation as Implicit Annotations for Chinese Word Segmentation. In Computational Linguistics. 35(4): 505-512.
Description Master's thesis
National Chengchi University
Department of Computer Science
107753037
Source http://thesis.lib.nccu.edu.tw/record/#G0107753037
Type thesis
URI http://nccur.lib.nccu.edu.tw/handle/140.119/134087
Table of contents
1 Introduction
1.1 Research background
1.2 Research objectives and motivation
1.3 Main contributions
1.4 Thesis organization
2 Literature review
2.1 Machine learning applied to Chinese court judgments
2.2 Natural language processing applied to Chinese court judgments
2.3 Machine learning for Chinese automatic text summarization
2.4 Machine learning for English automatic text summarization
3 Research methods and procedure
3.1 Sources of experimental data
3.2 Experimental framework
4 Data preprocessing
4.1 Chinese encoding conversion
4.2 Paragraph extraction
4.3 Sentence segmentation
4.4 Matching gist sentences to sentences in the judgment body
4.5 Word segmentation
5 Feature extraction
5.1 Basic features
5.1.1 Sentence features
5.1.2 Document features
5.2 TF-IDF
5.3 Word2Vec models
5.4 Transformer models
5.5 Context features
6 Extractive automatic text summarization models
6.1 Traditional machine learning classifiers
6.1.1 Decision tree classifier
6.1.2 Logistic regression
6.1.3 Naïve Bayes classifier
6.1.4 Support vector machine classifier
6.2 Deep neural network models
6.2.1 Fully-connected neural network
6.2.2 Recurrent neural network
6.2.3 Long short-term memory neural network
6.2.4 Gated recurrent unit neural network
6.2.5 Convolutional neural network
6.3 Ensemble learning
6.3.1 Bagging
6.3.2 Stacking
7 Extractive automatic text summarization experiments
7.1 Experimental data
7.2 Evaluation metrics
7.3 Experimental settings
7.3.1 Parameter settings
7.3.2 Data set settings
7.4 Experiments with traditional machine learning classifiers
7.4.1 Decision tree classifier experiments
7.4.2 Logistic regression experiments
7.4.3 Naïve Bayes classifier experiments
7.4.4 Support vector machine classifier experiments
7.5 Experiments with deep neural network classifiers
7.5.1 Experiments with different context lengths
7.5.2 Experiments with different embedding models
7.5.3 Experiments with different features
7.5.4 Experiments with different deep neural networks
7.5.5 Ensemble learning experiments
7.5.6 Experiments with a more precisely split data set
8 Abstractive automatic text summarization experiments
8.1 Experimental data
8.2 Evaluation metrics
8.3 Experiments with GPT-2-based abstractive summarization models
8.3.1 Experimental settings
8.3.2 Experimental results
9 Conclusions and future work
9.1 Conclusions
9.2 Future work
9.2.1 Possible improvements: data
9.2.2 Possible improvements: data preprocessing
9.2.3 Possible improvements: embedding models
9.2.4 Possible improvements: models and evaluation metrics
References
Appendix A Oral defense records
Format application/pdf, 2691255 bytes
DOI 10.6814/NCCU202100220