Title: Evaluating the Performance of Federated Learning Across Different Training Sample Distributions (訓練樣本分布對聯盟式學習成效之影響評估)
Author: Lin, Shu-Yu (林書羽)
Advisor: Liao, Wen-Hung (廖文宏)
Keywords: Federated Learning; Image Classification; Deep Learning; Data Balance
Date: 2023
Uploaded: 6-Jul-2023 16:22:37 (UTC+8)
Abstract:
Federated learning is an emerging machine-learning technique that addresses data privacy and data fragmentation: participating clients jointly train a shared model, and thereby share knowledge, without exposing their own data.
This thesis explores the benefits of federated learning for model training in image classification tasks and compares it with traditional centralized training. By generating both independent and identically distributed (IID) and non-IID dataset configurations, and by varying the number of collaborating clients, we observe how differences in training-sample distribution affect the effectiveness of federated learning. Within the non-IID setting, we additionally examine partitions in which clients hold disjoint (non-overlapping) class sets.
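The IID and disjoint-class configurations described above can be sketched with a small partitioning helper. This is an illustration of the general idea under stated assumptions (the helper name and the toy data are hypothetical), not the thesis's actual data-generation code:

```python
import numpy as np

def partition_labels(labels, num_clients, mode="iid", seed=0):
    """Split sample indices among clients.

    mode="iid": shuffle and split evenly, so every client sees
    roughly the same class mix.
    mode="disjoint": sort by class first, so clients receive
    non-overlapping class subsets (an extreme non-IID case).
    """
    rng = np.random.default_rng(seed)
    idx = np.arange(len(labels))
    if mode == "iid":
        rng.shuffle(idx)
    elif mode == "disjoint":
        idx = idx[np.argsort(labels, kind="stable")]
    else:
        raise ValueError(f"unknown mode: {mode}")
    return np.array_split(idx, num_clients)

# Toy example: 6 samples, 3 classes, 3 clients.
labels = np.array([0, 0, 1, 1, 2, 2])
shards = partition_labels(labels, 3, mode="disjoint")
# With disjoint partitioning, each client here holds exactly one class.
```

Real experiments would apply the same idea to the index arrays of CIFAR-style datasets; imbalanced non-IID splits can be obtained by making the shard sizes or per-class proportions unequal.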
Using deep learning with both pre-trained and trained-from-scratch models, this study examines the combined impact of the number of clients and the distribution characteristics, and evaluates the jointly trained models by Top-1 and Top-5 accuracy.
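Top-k accuracy, the metric named above, counts a prediction as correct when the true class appears among the k highest-scoring classes. A minimal sketch (the toy example uses Top-2 because it has only three classes; the thesis reports Top-1 and Top-5):

```python
import numpy as np

def top_k_accuracy(scores, targets, k):
    """Fraction of samples whose true class is among the k classes
    with the highest predicted scores."""
    # argsort is ascending, so the last k columns are the k largest scores.
    top_k = np.argsort(scores, axis=1)[:, -k:]
    hits = (top_k == np.asarray(targets)[:, None]).any(axis=1)
    return float(hits.mean())

# Two samples, three classes; true class is 1 in both rows.
scores = np.array([[0.1, 0.5, 0.4],   # Top-1 hit  (class 1 scores highest)
                   [0.6, 0.3, 0.1]])  # Top-1 miss, Top-2 hit
targets = [1, 1]
# top_k_accuracy(scores, targets, 1) -> 0.5
# top_k_accuracy(scores, targets, 2) -> 1.0
```

Top-1 accuracy is the ordinary classification accuracy; Top-5 is the standard ImageNet-style relaxation used when many classes are visually similar.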
Experimental results show that the initialization of joint training is critical: random initial weights lead to unstable model performance, whereas starting all clients from the same baseline weights yields stable training and higher accuracy. Model performance also varies with the data configuration: IID partitions perform best, imbalanced non-IID partitions come second, and disjoint-class partitions perform worst.
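The joint training described above aggregates client updates on a server. In the standard FedAvg algorithm [1], the server takes a weighted average of client parameters, each client weighted by its number of training samples. A minimal numpy sketch, illustrative only and not the thesis's Flower-based implementation:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate per-client parameter lists by a weighted average
    (FedAvg): each client's parameters count in proportion to its
    number of training samples."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()          # mixing weights, sum to 1
    num_layers = len(client_weights[0])
    return [
        sum(c * w[layer] for c, w in zip(coeffs, client_weights))
        for layer in range(num_layers)
    ]

# Two clients, one "layer" each; client A has 3x the data of client B.
w_a = [np.array([1.0, 1.0])]
w_b = [np.array([5.0, 5.0])]
merged = fedavg([w_a, w_b], client_sizes=[3, 1])
# merged[0] == [2.0, 2.0]   (0.75 * 1.0 + 0.25 * 5.0)
```

The finding about initialization fits this picture: averaging is only meaningful when client models start from a common baseline, so that their parameter updates lie in comparable regions of the loss landscape.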
References:
[1] McMahan, B., Moore, E., Ramage, D., Hampson, S., & y Arcas, B. A. (2017). Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics (pp. 1273-1282). PMLR.
[2] Yang, Q., Liu, Y., Chen, T., & Tong, Y. (2019). Federated machine learning: Concept and applications. ACM Transactions on Intelligent Systems and Technology (TIST), 10(2), 1-19.
[3] Wikipedia: Federated learning. https://en.wikipedia.org/wiki/Federated_learning
[4] Liu, Y., Kang, Y., Xing, C., Chen, T., & Yang, Q. (2020). A secure federated transfer learning framework. IEEE Intelligent Systems, 35(4), 70-82.
[5] Gentry, C. (2009). Fully homomorphic encryption using ideal lattices. In Proceedings of the 41st Annual ACM Symposium on Theory of Computing (pp. 169-178).
[6] Ho, Q., Cipar, J., Cui, H., Lee, S., Kim, J. K., Gibbons, P. B., ... & Xing, E. P. (2013). More effective distributed ML via a stale synchronous parallel parameter server. Advances in Neural Information Processing Systems, 26.
[7] Kairouz, P., McMahan, H. B., Avent, B., Bellet, A., Bennis, M., Bhagoji, A. N., ... & Zhao, S. (2021). Advances and open problems in federated learning. Foundations and Trends in Machine Learning, 14(1-2), 1-210.
[8] Li, Q., Diao, Y., Chen, Q., & He, B. (2022). Federated learning on non-IID data silos: An experimental study. In 2022 IEEE 38th International Conference on Data Engineering (ICDE) (pp. 965-978). IEEE.
[9] ILSVRC Top-5 error rates over the years. https://www.kaggle.com/getting-started/149448
[10] CIFAR-10 / CIFAR-100 datasets. https://www.cs.toronto.edu/~kriz/cifar.html
[11] Caltech-UCSD Birds-200-2011 dataset. https://paperswithcode.com/dataset/cub-200-2011
[12] LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278-2324.
[13] MNIST dataset. http://yann.lecun.com/exdb/mnist/
[14] Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2017). ImageNet classification with deep convolutional neural networks. Communications of the ACM, 60(6), 84-90.
[15] ImageNet dataset. https://www.image-net.org/
[16] Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., ... & Rabinovich, A. (2015). Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1-9).
[17] He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 770-778).
[18] Tiny-ImageNet dataset. https://www.kaggle.com/c/tiny-imagenet/overview
[19] Beutel, D. J., Topal, T., Mathur, A., Qiu, X., Parcollet, T., de Gusmão, P. P., & Lane, N. D. (2020). Flower: A friendly federated learning research framework. arXiv preprint arXiv:2007.14390.
[20] Flower framework. https://flower.dev/
[21] Tan, M., & Le, Q. (2019). EfficientNet: Rethinking model scaling for convolutional neural networks. In International Conference on Machine Learning (pp. 6105-6114). PMLR.
[22] Tan, M., & Le, Q. (2021). EfficientNetV2: Smaller models and faster training. In International Conference on Machine Learning (pp. 10096-10106). PMLR.
[23] OpenMMLab MMClassification GitHub repository. https://github.com/open-mmlab/mmclassification
[24] Zhao, Y., Li, M., Lai, L., Suda, N., Civin, D., & Chandra, V. (2018). Federated learning with non-IID data. arXiv preprint arXiv:1806.00582.
[25] Wikipedia: gRPC. https://en.wikipedia.org/wiki/GRPC
Description: Master's thesis
National Chengchi University, In-service Master's Program, Department of Computer Science
Student ID: 109971023
Source: http://thesis.lib.nccu.edu.tw/record/#G0109971023
Type: thesis
URI: http://nccur.lib.nccu.edu.tw/handle/140.119/145743
Table of Contents:
Abstract
Contents
List of Figures
List of Tables
Chapter 1  Introduction
  1.1 Background and Motivation
  1.2 Research Objectives
  1.3 Thesis Organization
Chapter 2  Related Work and Technical Background
  2.1 Background and Techniques of Federated Learning
    2.1.1 Modes of Federated Learning
    2.1.2 Federated Learning Algorithms
    2.1.3 Federated Learning vs. Distributed Learning
  2.2 Balance of Data Distributions
  2.3 Image Classification with Deep Learning
    2.3.1 Principles of Deep Learning
    2.3.2 Image Classification in Computer Vision
  2.4 Evaluation Metrics
    2.4.1 Confusion Matrix
    2.4.2 Accuracy
    2.4.3 Precision
    2.4.4 Recall
    2.4.5 F1-score
    2.4.6 Top-k Accuracy
Chapter 3  Methodology
  3.1 Basic Idea
  3.2 Preliminary Study
    3.2.1 Image Classification Datasets
    3.2.2 Data Preprocessing and Augmentation
    3.2.3 Federated Learning Framework
    3.2.4 Image Classification Models
  3.3 Research Architecture Design
    3.3.1 Problem Statement
    3.3.2 Research Architecture
  3.4 Goal Setting
Chapter 4  Experiments and Analysis of Results
  4.1 Experimental Environment
  4.2 Research Process
    4.2.1 Generating Dataset Distribution Patterns
    4.2.2 Data Processing and Augmentation
    4.2.3 Baseline Setup for Centralized Training
    4.2.4 Training Strategy for Federated Learning
    4.2.5 Federated Training with Pre-trained Models
    4.2.6 Federated Training with Models Trained from Scratch
    4.2.7 Re-sampling Validation Experiment
    4.2.8 Validating Federated Training on a Different Dataset: the Tiny-IMG128 Experiment
  4.3 Analysis of Results
Chapter 5  Conclusions and Future Work
  5.1 Conclusions
  5.2 Future Work
References
Format: application/pdf (5,749,660 bytes)