Title: 以分群、S-BERT與ChatGPT應用之深度學習方法於勞訴類案推薦
Deep Learning Methods Using Clustering, S-BERT, and ChatGPT for Recommending Similar Labor and Employment Cases
Creator: 吳柏憲 (Wu, Po-Hsien)
Contributor: 劉昭麟 (Liu, Chao-Lin, advisor); 吳柏憲 (Wu, Po-Hsien)
Key Words: 民事訴訟; 類似案件推薦; 語意分群; 語意分類; 卷積神經網路; 雙向長短記憶網路; S-BERT; 大型語言模型; ChatGPT
civil cases; similar case recommendation; machine learning; semantic clustering; semantic classification; convolutional neural networks; bi-directional long short-term memory; large language models; ChatGPT
Date: 2025
Date Issued: 4-Feb-2025 15:44:06 (UTC+8)
Summary: 為加速勞資爭議訴訟相關裁判之審理進程,法院會依據聲請人與相對人兩造,即勞資雙方所有爭執之事項進行條列,是為「爭點列表」,並逐一對其進行審理。然考量近年來勞資爭議相關訴訟有上升之趨勢,對一般大眾而言,查找具相似情事之裁判書並給出具可解釋性相似性理由之自動化方法尤為重要。因此,本研究結合傳統機器學習方法(分群、邏輯迴歸、集成學習)與深度學習方法(S-BERT、大型語言模型、CNN、BiLSTM),建構一類似案件推薦系統,並對索引與推薦之案件標注不同指涉事項之對應相似段落,以期減少一般大眾查找與篩選類似案件之負擔。本研究在前沿研究之標記資料與裁判書資料基礎上,將預測準確率由70%提升至77%;考慮到模型需具客觀能效之比較,我們亦在後續章節與其他類似案件推薦研究之架構進行能效對比。本研究主要貢獻為:基於敘述句分群結果產生對應內容之微調(fine-tune)訓練S-BERT模型之資料與方法、以兩造主張段落及爭點列表進行類似案件推薦、僅以原告主張段落與根據ChatGPT產生之摘要進行類似案件推薦。鑒於各級司法機關新收案件數量皆有上升之趨勢,本研究亦在無標記基礎下產生微調訓練S-BERT模型之資料與方法、使用ChatGPT抽取摘要之prompt,及相對通用之可解釋性標注與推薦架構(僅需主張段落與該段落之摘要),以期將該方法推廣至其他類別之民事訴訟,乃至其他領域。
To speed up the adjudication of labor litigation, the court lists all the matters in dispute between the two parties, that is, the employer and the employee, as a "list of disputed issues" and reviews them one by one. Given that the number of labor-dispute lawsuits has been rising in recent years, automated methods that find judgments describing similar situations and provide explainable reasons for the similarity are particularly important for the general public. This study therefore combines traditional machine learning methods (clustering, logistic regression, ensemble learning) with deep learning methods (S-BERT, large language models, CNN, BiLSTM) to build a similar-case recommendation system, and annotates both the indexed and the recommended cases with the corresponding similar paragraphs for the different matters they refer to, so as to reduce the burden on the general public of finding and screening similar cases. Building on the labeled data and judgment documents of prior state-of-the-art research, this study raises the prediction accuracy from 70% to 77%; to give an objective performance comparison, later chapters also compare our architecture with those of other similar-case recommendation studies. The main contributions of this study are: data and a method for fine-tuning the S-BERT model from the clustering results of narrative sentences; similar-case recommendation based on the claim paragraphs of both parties and the list of disputed issues; and similar-case recommendation based only on the plaintiff's claim paragraphs and summaries generated with ChatGPT. In view of the rising number of new cases received by courts at all levels, this study also provides data and a method for fine-tuning the S-BERT model without labeled data, prompts for extracting summaries with ChatGPT, and a relatively general explainable annotation and recommendation framework (requiring only the claim paragraphs and their summaries), with a view to extending the approach to other types of civil litigation and even other fields.
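As a rough illustration of the approach described in the abstract, the sketch below shows how sentence pairs derived from clusters of issue sentences could be used to fine-tune an S-BERT model with a contrastive objective (cf. [21]), and how the resulting embeddings could rank candidate cases by cosine similarity. This is a minimal sketch under stated assumptions, not the thesis's actual pipeline: the sentence-transformers library, the base checkpoint, the pairing heuristic in build_pairs_from_clusters, the toy sentences, and all hyperparameters are illustrative placeholders.

```python
# Minimal sketch (illustrative only): fine-tune an S-BERT model on sentence
# pairs derived from clusters of issue sentences, then rank candidate cases
# by cosine similarity of the fine-tuned embeddings.
import random
from itertools import combinations

from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses, util


def build_pairs_from_clusters(clusters):
    """clusters: list of sentence lists. Same-cluster pairs become positives
    (label 1); a few sampled cross-cluster pairs become negatives (label 0)."""
    examples = []
    for sentences in clusters:
        for s1, s2 in combinations(sentences, 2):
            examples.append(InputExample(texts=[s1, s2], label=1))
    for i, a in enumerate(clusters):
        for b in clusters[i + 1:]:
            for _ in range(min(len(a), len(b))):
                examples.append(InputExample(texts=[random.choice(a), random.choice(b)], label=0))
    return examples


# 1) Fine-tune with a contrastive loss (cf. Hadsell et al. [21]).
#    The checkpoint name and the toy clusters below are placeholders.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
clusters = [
    ["原告主張雇主未給付加班費", "勞工請求給付延長工時工資"],
    ["被告主張解僱具正當事由", "雇主抗辯終止勞動契約合法"],
]
train_loader = DataLoader(build_pairs_from_clusters(clusters), shuffle=True, batch_size=16)
model.fit(train_objectives=[(train_loader, losses.ContrastiveLoss(model))],
          epochs=1, warmup_steps=10)

# 2) Recommend: embed a query (e.g., a summary of the plaintiff's claims) and
#    rank candidate case summaries by cosine similarity.
query = "原告請求雇主給付資遣費與加班費"
candidates = ["勞工請求給付加班費及資遣費之裁判", "承攬報酬給付爭議之裁判"]
scores = util.cos_sim(model.encode(query, convert_to_tensor=True),
                      model.encode(candidates, convert_to_tensor=True))[0]
for text, score in sorted(zip(candidates, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {text}")
```

The full system described in the thesis additionally fuses clustering-intersection, CNN, BiLSTM, and logistic-regression signals and uses ChatGPT-generated summaries of the claim paragraphs; this sketch covers only the S-BERT similarity component.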
References:
[1] Yoosuf, S., & Yang, Y., "Fine-grained propaganda detection with fine-tuned BERT," Proceedings of the Second Workshop on Natural Language Processing for Internet Freedom: Censorship, Disinformation, and Propaganda, pp. 87-91, 2019.
[2] Shao, Y., Shao, T., Wang, M., Wang, P., & Gao, J., "A sentiment and style controllable approach for Chinese poetry generation," Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pp. 4784-4788, 2021.
[3] Liu, C.-L., & Liu, Y.-F., "Some practical analyses of the judgment documents of labor litigations for social conflicts and similar cases," LegalAIIA 2023, pp. 100-109, 2023.
[4] Reimers, N., & Gurevych, I., "Sentence-BERT: Sentence embeddings using Siamese BERT-networks," arXiv preprint, 2019. Available: https://arxiv.org/abs/1908.10084
[5] Brown, T. B., et al., "Language models are few-shot learners," arXiv preprint, 2020. Available: https://arxiv.org/abs/2005.14165
[6] Sutskever, I., Vinyals, O., & Le, Q. V., "Sequence to sequence learning with neural networks," arXiv preprint, 2014. Available: https://arxiv.org/abs/1409.3215
[7] Vaswani, A., et al., "Attention is all you need," arXiv preprint, 2017. Available: https://arxiv.org/abs/1706.03762
[8] Devlin, J., et al., "BERT: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint, 2018. Available: https://arxiv.org/abs/1810.04805
[9] Xu, Z., "RoBERTa-WWM-EXT fine-tuning for Chinese text classification," arXiv preprint, 2021. Available: https://arxiv.org/abs/2103.00492
[10] Xiao, C., Hu, X., Liu, Z., Tu, C., & Sun, M., "Lawformer: A pre-trained language model for Chinese legal long documents," AI Open, 2, pp. 79-84, 2021.
[11] Bommarito, M. J., & Katz, D. M., "GPT takes the bar exam," arXiv preprint, 2022. Available: https://arxiv.org/abs/2212.14402
[12] Katz, D. M., Bommarito, M. J., Gao, S., & Arredondo, P., "GPT-4 passes the bar exam," SSRN, 2024. Available: https://ssrn.com/abstract=4389233
[13] Hong, Z., Zhou, Q., Zhang, R., Li, W., & Mo, T., "Legal feature enhanced semantic matching network for similar case matching," 2020 International Joint Conference on Neural Networks (IJCNN), pp. 1-8, 2020.
[14] Bhattacharya, P., Ghosh, K., Pal, A., & Ghosh, S., "Methods for computing legal document similarity: A comparative study," arXiv preprint, 2020. Available: https://arxiv.org/abs/2004.12307
[15] Kumar, S., Reddy, P. K., Reddy, V. B., & Singh, A., "Similarity analysis of legal judgments," Proceedings of the Fourth Annual ACM Bangalore Conference, pp. 1-4, 2011.
[16] Raghav, K., Balakrishna Reddy, P., Balakista Reddy, V., & Krishna Reddy, P., "Text and citations based cluster analysis of legal judgments," Mining Intelligence and Knowledge Exploration: Third International Conference, pp. 449-459, 2015.
[17] Pang, L., Lan, Y., Guo, J., Xu, J., Wan, S., & Cheng, X., "Text matching as image recognition," Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pp. 2793-2799, 2016.
[18] Yang, W., Jia, W., Zhou, X., & Luo, Y., "Legal judgment prediction via multi-perspective bi-feedback network," arXiv preprint, 2019. Available: https://arxiv.org/abs/1905.03969
[19] Campello, R. J., Moulavi, D., & Sander, J., "Density-based clustering based on hierarchical density estimates," Pacific-Asia Conference on Knowledge Discovery and Data Mining, pp. 160-172, 2013.
[20] Frey, B. J., & Dueck, D., "Clustering by passing messages between data points," Science, 315(5814), pp. 972-976, 2007.
[21] Hadsell, R., Chopra, S., & LeCun, Y., "Dimensionality reduction by learning an invariant mapping," 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), 2, pp. 1735-1742, June 2006.
[22] Kingma, D. P., & Ba, J., "Adam: A method for stochastic optimization," arXiv preprint, 2014. Available: https://arxiv.org/abs/1412.6980
[23] Xiao, C., Zhong, H., Guo, Z., Tu, C., Liu, Z., Sun, M., ... & Xu, J., "CAIL2019-SCM: A dataset of similar case matching in legal domain," arXiv preprint, 2019. Available: https://arxiv.org/abs/1911.08962
[24] Liu, Y.-F., Liu, C.-L., & Yang, C., "Clustering issues in civil judgments for recommending similar cases," Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022), pp. 184-192, 2022.
[25] Müllner, D., "Modern hierarchical, agglomerative clustering algorithms," arXiv preprint, 2011. Available: https://arxiv.org/abs/1109.2378
[26] Wang, K., Zhang, J., Li, D., Zhang, X., & Guo, T., "Adaptive affinity propagation clustering," arXiv preprint, 2008. Available: https://arxiv.org/abs/0805.1096
Description: 碩士 (Master's thesis); 國立政治大學 (National Chengchi University); 資訊科學系 (Department of Computer Science); 111753120
Source: http://thesis.lib.nccu.edu.tw/record/#G0111753120
Type: thesis
URI: https://nccur.lib.nccu.edu.tw/handle/140.119/155453
Table of Contents:
Abstract (Chinese); Abstract (English); Contents; List of Tables; List of Figures
Chapter 1 Introduction: 1.1 Research Background and Motivation; 1.2 Research Objectives; 1.3 Main Contributions; 1.4 Thesis Organization
Chapter 2 Literature Review: 2.1 NLP Techniques Related to This Study (evolution of language models; language models trained on semantic similarity; large language models); 2.2 Neural Networks; 2.3 Research on Legal Text Similarity
Chapter 3 Corpus Sources and Research Framework: 3.1 Research Framework (corpus preprocessing; feature extraction; similar-case prediction); 3.2 Corpus Sources; 3.3 Extracting Disputed Issues with ChatGPT
Chapter 4 Corpus Preprocessing: 4.1 Corpus and Labeled Data (labels by a single annotator; majority-vote labels by three annotators); 4.2 S-BERT Fine-tuning; 4.3 Vectorizing and Clustering Issue Sentences
Chapter 5 Feature Extraction: 5.1 Ternarized Relation Branch from Cluster Intersections; 5.2 CNN Branch; 5.3 Other Feature-Relation Branches; 5.4 BiLSTM Branch; 5.5 Regression Prediction Combining Multiple Methods
Chapter 6 Experimental Results: 6.1 Experiment Overview; 6.2 Comparison of Different Method Combinations; 6.3 Comparison of S-BERT Models; 6.4 Comparison on Disputed Issues; 6.5 Comparison with Model Architectures from Related Work (reference models for performance comparison; data preparation)
Chapter 7 Similar-Case Prediction and Recommendation: 7.1 Explainable Similar-Case Prediction (searching centroid sentences of existing clusters of old cases; extracting issue sentences from new cases; cluster labeling of new cases' issue sentences; recommending similar cases for an indexed case; visualizing relations between indexed and recommended cases); 7.2 Performance Comparison of Different Recommendation Bases
Chapter 8 Conclusions and Future Work
References; Appendix 1; Appendix 2; Appendix 3
Format: application/pdf (2,206,489 bytes)