Publications-Periodical Articles

Title: Interactive content-based image retrieval with deep learning for CT abdominal organ recognition
Authors: 羅崇銘 (Lo, Chung-Ming); Wang, Chi-Cheng; Hung, Peng-Hsiang
Contributor: 圖檔所
Keywords: abdominal CT; content-based image retrieval; deep learning; vision transformer
Date: 2024-02
Date uploaded: 29-Apr-2024 13:19:20 (UTC+8)
Abstract: Objective. Recognizing the most relevant seven organs in an abdominal computed tomography (CT) slice requires sophisticated knowledge. This study proposed automatically extracting relevant features and applying them in a content-based image retrieval (CBIR) system to provide similar evidence for clinical use. Approach. A total of 2827 abdominal CT slices, including 638 liver, 450 stomach, 229 pancreas, 442 spleen, 362 right kidney, 424 left kidney and 282 gallbladder tissues, were collected to evaluate the proposed CBIR in the present study. Upon fine-tuning, high-level features used to automatically interpret the differences among the seven organs were extracted via deep learning architectures, including DenseNet, Vision Transformer (ViT), and Swin Transformer v2 (SwinViT). Three images with different annotations were employed in the classification and query. Main results. The resulting performances included the classification accuracy (94%–99%) and retrieval result (0.98–0.99). Considering global features and multiple resolutions, SwinViT performed better than ViT. ViT also benefited from a better receptive field to outperform DenseNet. Additionally, the use of whole images can obtain almost perfect results regardless of which deep learning architectures are used. Significance. The experiment showed that using pretrained deep learning architectures and fine-tuning with enough data can achieve successful recognition of seven abdominal organs. The CBIR system can provide more convincing evidence for recognizing abdominal organs via similarity measurements, which could lead to additional possibilities in clinical practice.
Relation: Physics in Medicine & Biology, Vol.69, No.4, 045004
Type: article
URI: https://nccur.lib.nccu.edu.tw/handle/140.119/150955
DOI: https://doi.org/10.1088/1361-6560/ad1f86
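
The abstract above describes the general pattern of deep-feature CBIR: a pretrained, fine-tuned backbone (DenseNet, ViT, or Swin Transformer v2) converts each CT slice into a feature vector, and retrieval ranks database images by feature similarity to the query. The sketch below is a minimal, hypothetical illustration of that pattern only; it is not the authors' implementation. The backbone choice (an off-the-shelf DenseNet-121 from torchvision), the preprocessing, and the file names are assumptions made for demonstration.

```python
# Illustrative sketch only: generic deep-feature CBIR with cosine similarity.
# Not the authors' code; the backbone, preprocessing, and file names are assumptions.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Load a pretrained DenseNet-121 and drop its classifier so the forward pass
# returns the pooled 1024-d feature vector instead of class scores.
backbone = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
backbone.classifier = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(path: str) -> torch.Tensor:
    """Return an L2-normalized deep feature vector for one CT slice image."""
    img = Image.open(path).convert("RGB")
    feat = backbone(preprocess(img).unsqueeze(0))
    return F.normalize(feat, dim=1).squeeze(0)

def retrieve(query_path: str, gallery: dict[str, torch.Tensor], k: int = 5):
    """Rank gallery images by cosine similarity to the query and return the top k."""
    q = embed(query_path)
    scores = {name: float(q @ v) for name, v in gallery.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

# Example usage (hypothetical file names):
# gallery = {p: embed(p) for p in ["liver_001.png", "spleen_004.png"]}
# print(retrieve("query_slice.png", gallery, k=3))
```

In practice the backbone would first be fine-tuned on the labeled organ slices, and the fine-tuned features reused for both classification and similarity-based retrieval, as the abstract describes.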