Academic Output - Theses

Title 基於孿生網絡之正則化對比式遷移學習於醫療影像
Contrastive Transfer Learning for Regularization with Triplet Network on Medical Imaging
Author Yu, Chin-Feng (游勤葑)
Advisor Chiu, Shu-I (邱淑怡)
Keywords Macular degeneration; Contrastive learning; Transfer learning; Regularization
Date 2022
Uploaded 5-Oct-2022 09:15:57 (UTC+8)
Abstract This thesis proposes a novel transfer learning architecture for color fundus photography, called Contrastive Transfer Learning for Regularization with Triplet Network (CTLRT). CTLRT adds three contrastive regularization loss terms on top of a transfer learning backbone. Across three fundus photography datasets and multiple transfer learning backbones, we show that CTLRT not only achieves higher accuracy than conventional transfer learning, but also mitigates the overfitting introduced by complex models through the proposed contrastive regularization losses, improving the model's generalization ability. Visualizations of the regions the model attends to confirm that CTLRT correctly focuses on the diseased areas.
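
This record does not spell out the three contrastive regularization terms, so the fragment below is only a minimal sketch of the general idea the abstract describes: an ImageNet-pretrained backbone fine-tuned jointly on a classification loss and a triplet-style contrastive loss that acts as a regularizer. The margin, the weighting factor ALPHA, the 128-dimensional embedding head, and the two-class setup are illustrative assumptions, not values taken from the thesis.

# Minimal sketch, not the thesis's exact formulation: cross-entropy on the
# anchor batch plus one triplet term used as a contrastive regularizer.
import tensorflow as tf

NUM_CLASSES = 2   # assumed: diseased vs. normal fundus
MARGIN = 1.0      # assumed triplet margin
ALPHA = 0.1       # assumed weight of the contrastive regularizer

# Transfer learning backbone pretrained on ImageNet (Xception is one of the
# three backbones evaluated in the thesis).
backbone = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, pooling="avg")
embed_head = tf.keras.layers.Dense(128)        # embedding used by the triplet term
cls_head = tf.keras.layers.Dense(NUM_CLASSES)  # disease classifier (logits)

ce = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
opt = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(anchor, positive, negative, labels):
    # anchor and positive share a class; negative comes from another class.
    with tf.GradientTape() as tape:
        fa = backbone(anchor, training=True)
        fp = backbone(positive, training=True)
        fn = backbone(negative, training=True)
        za, zp, zn = embed_head(fa), embed_head(fp), embed_head(fn)

        # Triplet loss: pull same-class embeddings together, push the
        # negative away by at least MARGIN.
        d_ap = tf.reduce_sum(tf.square(za - zp), axis=1)
        d_an = tf.reduce_sum(tf.square(za - zn), axis=1)
        triplet = tf.reduce_mean(tf.maximum(d_ap - d_an + MARGIN, 0.0))

        loss = ce(labels, cls_head(fa)) + ALPHA * triplet
    variables = (backbone.trainable_variables
                 + embed_head.trainable_variables
                 + cls_head.trainable_variables)
    opt.apply_gradients(zip(tape.gradient(loss, variables), variables))
    return loss

In this form the triplet term only shapes the shared embedding space, pulling fundus images of the same class together and pushing other classes apart, so it regularizes the backbone rather than replacing the supervised signal; per the abstract, the thesis combines three such contrastive terms.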
References Agarap, A. F. (2018). Deep learning using rectified linear units (ReLU). arXiv preprint arXiv:1803.08375.
Bromley, J., Guyon, I., LeCun, Y., Säckinger, E., and Shah, R. (1993). Signature verification using a "Siamese" time delay neural network. Advances in Neural Information Processing Systems, 6.
Chakraborty, R. and Pramanik, A. (2022). DCNN-based prediction model for detection of age-related macular degeneration from color fundus images. Medical & Biological Engineering & Computing, 60(5):1431–1448.
Chen, T., Kornblith, S., Norouzi, M., and Hinton, G. (2020a). A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, pages 1597–1607. PMLR.
Chen, T.-C., Lim, W. S., Wang, V. Y., Ko, M.-L., Chiu, S.-I., Huang, Y.-S., Lai, F., Yang, C.-M., Hu, F.-R., Jang, J.-S. R., et al. (2021). Artificial intelligence–assisted early detection of retinitis pigmentosa—the most common inherited retinal degeneration. Journal of Digital Imaging, 34(4):948–958.
Chen, X., Fan, H., Girshick, R., and He, K. (2020b). Improved baselines with momentum contrastive learning. arXiv preprint arXiv:2003.04297.
Chen, X. and He, K. (2021). Exploring simple Siamese representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15750–15758.
Chollet, F. (2017). Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1251–1258.
Baidu Research Open-Access Dataset (2019). iChallenge-AMD.
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009a). ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255.
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009b). ImageNet: A large-scale hierarchical image database. In CVPR 2009.
Farnell, D. J., Hatfield, F. N., Knox, P., Reakes, M., Spencer, S., Parry, D., and Harding, S. P. (2008). Enhancement of blood vessels in digital fundus photographs via the application of multiscale line operators. Journal of the Franklin Institute, 345(7):748–765.
Fukushima, K. and Miyake, S. (1982). Neocognitron: A new algorithm for pattern recognition tolerant of deformations and shifts in position. Pattern Recognition, 15(6):455–469.
Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al. (2020). Bootstrap your own latent: A new approach to self-supervised learning. Advances in Neural Information Processing Systems, 33:21271–21284.
Hadsell, R., Chopra, S., and LeCun, Y. (2006). Dimensionality reduction by learning an invariant mapping. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), volume 2, pages 1735–1742. IEEE.
He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778.
Huang, G., Liu, Z., Van Der Maaten, L., and Weinberger, K. Q. (2017). Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4700–4708.
Hubel, D. H. and Wiesel, T. N. (1962). Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of Physiology, 160(1):106.
Krizhevsky, A., Hinton, G., et al. (2009). Learning multiple layers of features from tiny images. Technical report, Citeseer.
Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105.
LeCun, Y., Boser, B., Denker, J., Henderson, D., Howard, R., Hubbard, W., and Jackel, L. (1989). Handwritten digit recognition with a back-propagation network. Advances in Neural Information Processing Systems, 2.
LeCun, Y., Bottou, L., Bengio, Y., Haffner, P., et al. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324.
Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., and Ng, A. Y. (2011). Reading digits in natural images with unsupervised feature learning.
Pan, S. J. and Yang, Q. (2009). A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359.
Raghu, M., Zhang, C., Kleinberg, J., and Bengio, S. (2019). Transfusion: Understanding transfer learning for medical imaging. Advances in Neural Information Processing Systems, 32.
Robinson, J., Chuang, C.-Y., Sra, S., and Jegelka, S. (2020). Contrastive learning with hard negative samples. arXiv preprint arXiv:2010.04592.
Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323(6088):533–536.
Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., and Batra, D. (2017). Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pages 618–626.
Shorten, C. and Khoshgoftaar, T. M. (2019). A survey on image data augmentation for deep learning. Journal of Big Data, 6(1):60.
Smith, R. (2007). An overview of the Tesseract OCR engine. In Ninth International Conference on Document Analysis and Recognition (ICDAR 2007), volume 2, pages 629–633. IEEE.
Szegedy, C., Ioffe, S., Vanhoucke, V., and Alemi, A. A. (2017). Inception-v4, Inception-ResNet and the impact of residual connections on learning. In AAAI, volume 4, page 12.
Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. (2015). Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9.
Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. (2016). Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818–2826.
Torrey, L. and Shavlik, J. (2010). Transfer learning. In Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods, and Techniques, pages 242–264. IGI Global.
Yu, Y., Chen, X., Zhu, X., Zhang, P., Hou, Y., Zhang, R., and Wu, C. (2020). Performance of deep transfer learning for detecting abnormal fundus images. Journal of Current Ophthalmology, 32(4):368.
Zhai, X., Oliver, A., Kolesnikov, A., and Beyer, L. (2019). S4L: Self-supervised semi-supervised learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1476–1485.
Description Master's degree
National Chengchi University
Department of Computer Science
110753205
Source http://thesis.lib.nccu.edu.tw/record/#G0110753205
Type thesis
URI http://nccur.lib.nccu.edu.tw/handle/140.119/142128
Table of Contents
Abstract (Chinese) i
Abstract (English) ii
Table of Contents iii
List of Figures v
List of Tables vii
Chapter 1 Introduction 1
1.1 Research Background and Motivation 1
1.2 Research Questions and Objectives 3
1.3 Thesis Organization 5
Chapter 2 Literature Review 6
2.1 Deep Convolutional Neural Networks 6
2.2 Deep Convolutional Neural Networks and Medical Imaging 7
2.3 Transfer Learning and Medical Imaging 7
2.4 Data Augmentation 8
2.5 Contrastive Learning 10
2.6 Self-Supervised Learning 12
2.6.1 Exploring Simple Siamese Representation Learning 12
Chapter 3 Methodology 14
3.1 Contrastive Transfer Learning for Regularization with Triplet Network 14
3.2 Optical Character Recognition 23
Chapter 4 Experiments and Analysis 24
4.1 Datasets 24
4.2 Experimental Settings and Hyperparameters 25
4.3 Training Behavior of the Loss Functions 26
4.4 CTLRT with an Xception Backbone 30
4.5 CTLRT with an InceptionV3 Backbone 32
4.6 CTLRT with a DenseNet201 Backbone 35
4.7 Evaluation Summary of the Three Backbones 38
4.8 Visualizing the Model's Regions of Attention 39
4.9 ARIA and iChallenge-AMD 42
Chapter 5 Conclusion and Future Work 44
5.1 Conclusion 44
5.2 Future Work 44
References 46
Format application/pdf (55359093 bytes)
DOI 10.6814/NCCU202201567