Title: 基於奇異值分解的醫學影像深度學習隱私保護機制
A Privacy-Preserving Mechanism for Medical Image Deep Learning Based on Singular Value Decomposition
Author: Wang, Sih-Yong (王思詠)
Advisor: Tso, Ray-Lin (左瑞麟)
Keywords: Medical Image Obfuscation; Privacy-Preserving Deep Learning; Singular Value Decomposition; Random Projection; Adversarial Reconstruction
Date: 2025
Uploaded: 1-Jul-2025 15:49:44 (UTC+8)
Abstract: Despite the remarkable success of deep learning in medical image analysis, serious privacy concerns persist because models are susceptible to reconstruction and inversion attacks. To address these vulnerabilities, we propose a novel and efficient multistage obfuscation framework that enables privacy-preserving deep learning on medical images without compromising utility. Our method applies Singular Value Decomposition (SVD) combined with quantization, Gaussian noise injection, and random projection to disrupt both statistical and structural patterns, thereby mitigating adversarial reconstruction risks. Unlike prior SVD-based or PCA-based schemes, which retain identifiable components exploitable by attackers, our design achieves strong obfuscation of sensitive image features while maintaining high model accuracy. Experimental results on brain tumor MRI and PCOS ultrasound datasets show that obfuscated images classified with ResNet50, MobileNetV2, and InceptionV3 maintain over 85% classification accuracy. Furthermore, our framework significantly reduces reconstructability, achieving SSIM scores below 0.07 and negative PSNR in adversarial settings, including Leading Bit and Minimum Difference attacks. This study bridges the gap between data security and clinical AI utility, offering a scalable, lightweight, and attack-resilient solution for privacy-preserving deep learning in medical imaging.

Description: Master's thesis
National Chengchi University
Master's Program in Information Security
Student ID: 112791003
Source: http://thesis.lib.nccu.edu.tw/record/#G0112791003
URI: https://nccur.lib.nccu.edu.tw/handle/140.119/157883
Other Identifier: G0112791003
Type: thesis
Format: application/pdf, 992034 bytes

Table of Contents:
1 Introduction
2 Methods and Materials
2.1 SVD for Image Obfuscation
2.2 Security Vulnerabilities of SVD Obfuscation
2.3 Privacy-Enhancing Modifications
2.4 Proposed Obfuscation Framework
3 Experiments and Results
3.1 Experimental Setup
3.2 Model Performance on Obfuscated Data
3.3 Effect of Retained Singular Components on Classification Performance
3.4 Visualization of the Obfuscation Process
3.5 Reconstruction Robustness Evaluation
3.6 Computational Efficiency Analysis
3.7 Analysis and Implications
3.8 Experimental Summary
4 Security Analysis Against Reconstruction Attacks
4.1 Leading Bit Attack
4.2 Minimum Difference Attack
5 Discussion and Conclusions
5.1 Discussion
5.2 Conclusions
References
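The multistage pipeline named in the abstract (SVD, quantization, Gaussian noise injection, random projection) can be sketched as below. This is a minimal illustration under stated assumptions, not the thesis's actual algorithm: the retained rank `k`, the quantization step, the noise level, and the use of a square random row-mixing matrix are all illustrative parameters chosen here, not values from the thesis.

```python
import numpy as np

def obfuscate(image, k=32, q_step=0.05, sigma=0.1, seed=0):
    """Sketch of a multistage obfuscation: SVD truncation, quantization
    of singular values, Gaussian noise injection, and random projection."""
    # 1. SVD: decompose and keep only the top-k singular components.
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    U, s, Vt = U[:, :k], s[:k], Vt[:k, :]

    # 2. Quantize the singular values onto a coarse grid.
    s_q = np.round(s / q_step) * q_step

    # 3. Rebuild the rank-k image and inject Gaussian noise.
    rng = np.random.default_rng(seed)
    core = (U * s_q) @ Vt
    core += rng.normal(0.0, sigma, size=core.shape)

    # 4. Random projection: mix rows with a secret random matrix,
    #    destroying spatial structure while preserving gross statistics.
    R = rng.normal(0.0, 1.0 / np.sqrt(core.shape[0]),
                   size=(core.shape[0], core.shape[0]))
    return R @ core

img = np.random.rand(64, 64)
obf = obfuscate(img)
print(obf.shape)  # (64, 64)
```

The output keeps the input's shape, so a classifier can consume it directly, yet a viewer cannot recover the original pixels without the projection matrix.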

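The reported reconstructability metrics can be grounded with the standard PSNR definition, PSNR = 10·log10(MAX²/MSE): the value turns negative exactly when the mean squared reconstruction error exceeds the squared dynamic range, i.e. when the attack output is worse than a trivial guess. A minimal sketch (the thesis presumably uses a standard library implementation; SSIM is omitted here):

```python
import numpy as np

def psnr(original, reconstructed, max_val=1.0):
    """Peak signal-to-noise ratio in dB; negative whenever the mean
    squared error exceeds the squared dynamic range (max_val ** 2)."""
    mse = np.mean((original - reconstructed) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

a = np.zeros((8, 8))
b = np.full((8, 8), 2.0)  # error twice the dynamic range
print(psnr(a, b))         # MSE = 4 > 1, so 10*log10(1/4) ≈ -6.02 dB
```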