dc.contributor | 圖檔所 (Graduate Institute of Library, Information and Archival Studies) | |
dc.creator (Author) | 羅崇銘 | |
dc.creator (Author) | Lo, Chung-Ming; Chang, Yu-Chi; Chen, Yi-Kong; Wu, Ping-Hsun; Luh, Hsing | |
dc.date (Date) | 2024-06 | |
dc.date.accessioned | 2024-11-15 | - |
dc.date.available | 2024-11-15 | - |
dc.date.issued (Upload Date) | 2024-11-15 | -
dc.identifier.uri (URI) | https://nccur.lib.nccu.edu.tw/handle/140.119/154268 | - |
dc.description.abstract (Abstract) | The global death rate of chronic kidney disease (CKD) continues to increase, making it a serious health issue. Ultrasound imaging plays a significant role in the evaluation of CKD. However, quality differences across multi-center datasets pose a challenge for kidney ultrasound image segmentation. To address this problem, this study applied W-Net, a double U-Net architecture trained in two stages. In the first stage, the pixel-wise nnU-Net was pretrained on 4,586 images and fine-tuned on 534 images. In the second stage, the region-wise nnU-Net was trained on 72 images using the inference results of the first stage, achieving a 6.95% improvement over the first stage. These results provide further evidence of the practical applicability of deep learning-based segmentation in kidney ultrasound and its potential for clinical use. | |
dc.format.extent | 107 bytes | - |
dc.format.mimetype | text/html | - |
dc.relation (Relation) | 2024 IEEE Conference on Artificial Intelligence, IEEE | |
dc.subject (Keywords) | W-Net; kidney; ultrasound; segmentation; multicenter | |
dc.title (Title) | W-Net: two-stage segmentation for multi-center kidney ultrasound | |
dc.type (Type) | conference | |
dc.identifier.doi (DOI) | 10.1109/CAI59869.2024.00274 | |
dc.doi.uri (DOI) | https://doi.org/10.1109/CAI59869.2024.00274 | |
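The abstract above describes a two-stage pipeline in which a second segmentation network is trained on the inference output of the first. Below is a minimal sketch of that idea, assuming a PyTorch-style setup in which the first-stage predicted mask is fed to the second stage as an extra input channel; the module names, loss, and channel-concatenation choice are illustrative assumptions, not the authors' nnU-Net-based implementation.

```python
# Hypothetical sketch of a two-stage ("W-Net"-style) segmentation pipeline:
# stage 1 predicts a coarse mask; stage 2 refines it using the image plus the
# stage-1 prediction as an extra input channel. Not the authors' nnU-Net code.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Placeholder segmentation network (stands in for nnU-Net)."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # single-channel (kidney) logit map
        )

    def forward(self, x):
        return self.net(x)

stage1 = TinyUNet(in_channels=1)   # pixel-wise stage: ultrasound image only
stage2 = TinyUNet(in_channels=2)   # region-wise stage: image + stage-1 mask

def train_stage2_step(image, target, optimizer, loss_fn=nn.BCEWithLogitsLoss()):
    """One training step for the second stage, driven by stage-1 inference."""
    with torch.no_grad():                       # stage 1 is frozen here
        coarse = torch.sigmoid(stage1(image))   # coarse probability map
    refined_logits = stage2(torch.cat([image, coarse], dim=1))
    loss = loss_fn(refined_logits, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random tensors standing in for ultrasound data.
opt = torch.optim.Adam(stage2.parameters(), lr=1e-4)
img = torch.randn(2, 1, 64, 64)                          # 2 grayscale images
msk = torch.randint(0, 2, (2, 1, 64, 64)).float()        # binary kidney masks
print(train_stage2_step(img, msk, opt))
```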