Title	Evaluating the Performance of Federated Learning Across Different Training Sample Distributions
Authors	Liao, Wen-Hung (廖文宏); Lin, Shu-Yu; Wu, Yi-Chieh
Contributor	Department of Computer Science
Keywords	Federated learning; sample distribution; deep neural networks
Date	2024-01
Upload time	7-Jan-2025 09:36:41 (UTC+8)
Abstract	This research investigates how the distribution of training samples affects the performance of federated learning. By simulating datasets that are independent and identically distributed (IID) and non-independent and identically distributed (non-IID), and by varying the number of collaborating units, we observe how differences in training sample distribution influence the effectiveness of federated learning. In particular, we examine the special case of non-intersecting classes under the non-IID setting. Using deep learning methods with both pretrained and trained-from-scratch models, this study comprehensively analyzes the impact of the number and distribution of units and evaluates the results of joint training in terms of Top-1 and Top-5 accuracy. Experimental results show that the initial weight setting for joint training is critical: random weights lead to unstable model performance, whereas weights initialized under the same criteria yield stable and more accurate results. Model performance also varies with the characteristics of the data distribution. The federated learning model trained on IID samples performs best, followed by the imbalanced non-IID distribution, while non-intersecting class allocation is the least ideal.
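The abstract contrasts three allocation schemes across collaborating units: IID, imbalanced non-IID, and non-intersecting classes. A minimal sketch of how such partitions could be simulated for a labeled dataset is shown below; the function name, the `mode` strings, and the 80% skew ratio are illustrative assumptions, not the paper's actual protocol.

```python
import random
from collections import defaultdict

def partition(labels, n_clients, mode="iid", seed=0):
    """Split sample indices among clients to emulate the three
    training-sample distribution settings discussed in the abstract."""
    rng = random.Random(seed)
    idx = list(range(len(labels)))
    rng.shuffle(idx)
    if mode == "iid":
        # IID: every client receives a uniform random share of all classes.
        return [idx[i::n_clients] for i in range(n_clients)]
    by_class = defaultdict(list)
    for i in idx:
        by_class[labels[i]].append(i)
    classes = sorted(by_class)
    if mode == "disjoint":
        # Non-intersecting classes: each class goes to exactly one client,
        # so the clients' label sets do not overlap at all.
        shards = [[] for _ in range(n_clients)]
        for c, cls in enumerate(classes):
            shards[c % n_clients].extend(by_class[cls])
        return shards
    if mode == "imbalanced":
        # Non-IID (imbalanced): skew each class toward one "home" client
        # (80% here, an arbitrary choice) while spreading the remainder,
        # so classes still overlap but in very unequal proportions.
        shards = [[] for _ in range(n_clients)]
        for c, cls in enumerate(classes):
            items = by_class[cls]
            cut = int(0.8 * len(items))
            shards[c % n_clients].extend(items[:cut])
            for j, i in enumerate(items[cut:]):
                shards[(c + 1 + j) % n_clients].append(i)
        return shards
    raise ValueError(f"unknown mode: {mode}")
```

Each client would then train locally on its shard before the server aggregates the weights, which is how the distribution differences translate into the Top-1/Top-5 accuracy gaps the study reports.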
Relation	Proceedings of the 18th International Conference on Ubiquitous Information Management and Communication (IMCOM), IEEE SMC Society
Type	conference
DOI https://doi.org/10.1109/IMCOM60618.2024.10418352
URI	https://nccur.lib.nccu.edu.tw/handle/140.119/155077