Academic Output - Theses

Title Adversarial Defense Mechanism Using Reversible Halftoning Techniques (基於可逆半色調技術的對抗例防禦機制探討)
Author Yu, Zhen-Sheng (于振升)
Contributors Liao, Wen-Hung (廖文宏), advisor; Yu, Zhen-Sheng (于振升)
Keywords Reversible Halftoning; Image Classification; Deep Learning; Vision Transformers
Date 2024
Date Uploaded 3-Jun-2024 11:42:42 (UTC+8)
Abstract
Adversarial examples are inputs crafted to attack machine learning models: small perturbations that cause a model to misclassify or produce erroneous outputs. Adversarial attacks and defenses are a central security issue in machine learning and deep learning, since such attacks can cause models to fail in real applications. This thesis studies how effectively reversible halftoning, applied to images after they have been attacked, cancels the effect of the attack, and compares it with the traditional Floyd-Steinberg dithering algorithm. Several deep learning models are compared by applying each image processing method to attacked datasets over repeated iterations, and observing how much of the attack's impact each method removes at different iteration counts. The aim is to find the balance point between the information loss inherent in the image processing itself and its defensive effect against adversarial attacks, so as to keep the model's recognition performance as high as possible. Two architectures are examined: the conventional convolutional network ResNet and the CvT vision transformer. Adversarial images processed by the various methods are evaluated, with defensive success measured by Top-1 and Top-5 accuracy. The results show that converting attacked images with the reversible halftoning restoration technique removes the influence of adversarial attacks more effectively than processing them with traditional Floyd-Steinberg dithering. Robustness also varies across architectures: the CvT vision transformer already defends reasonably well on its own, whereas ResNet's recognition accuracy drops sharply on attacked images; in that case, processing the images with the reversible halftoning restoration technique studied in this thesis yields a substantial accuracy improvement.
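The pipeline the abstract describes (attack an image, push it through a halftone round-trip, re-classify) can be illustrated with a minimal Python sketch. This is not the thesis's actual code: torchvision's pretrained ResNet-50, the FGSM step size eps, and the input file name sample.jpg are illustrative assumptions, and PIL's built-in Floyd-Steinberg dithering stands in for the halftone step.

    # Minimal sketch of the defense pipeline: attack, halftone round-trip,
    # re-classify. Model, eps, and input file are illustrative assumptions.
    import torch
    import torch.nn.functional as F
    from PIL import Image
    from torchvision import models, transforms

    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
    preprocess = transforms.Compose([
        transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor()])

    def fgsm_attack(x, label, eps=4 / 255):
        # FGSM [6]: step along the sign of the input gradient of the loss.
        x = x.clone().requires_grad_(True)
        F.cross_entropy(model(x), label).backward()
        return (x + eps * x.grad.sign()).clamp(0, 1).detach()

    def halftone_roundtrip(x):
        # Baseline defense: per-channel dithering to 1-bit and back to 8-bit.
        # PIL's convert('1') applies Floyd-Steinberg error diffusion by default.
        out = []
        for img in x:
            channels = [c.convert('1').convert('L')
                        for c in transforms.ToPILImage()(img).split()]
            out.append(transforms.ToTensor()(Image.merge('RGB', channels)))
        return torch.stack(out)

    def topk_accuracy(logits, labels, k):
        hits = logits.topk(k, dim=1).indices == labels[:, None]
        return hits.any(dim=1).float().mean().item()

    x = preprocess(Image.open('sample.jpg')).unsqueeze(0)  # placeholder input
    y = model(x).argmax(dim=1)       # treat the clean prediction as ground truth
    x_adv = fgsm_attack(x, y)
    for name, batch in [('clean', x), ('attacked', x_adv),
                        ('attacked + round-trip', halftone_roundtrip(x_adv))]:
        logits = model(batch)
        print(f'{name}: top-1 {topk_accuracy(logits, y, 1):.2f}, '
              f'top-5 {topk_accuracy(logits, y, 5):.2f}')

In the thesis's setting, halftone_roundtrip would be swapped for the reversible halftoning encode/restore networks of Xia et al. [19], and the round-trip would be applied several times to probe the trade-off between information loss and attack suppression described above.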
References
[1] Wikipedia: Deep Learning Architectures (深度學習). https://zh.wikipedia.org/zh-tw/%E6%B7%B1%E5%BA%A6%E5%AD%A6%E4%B9%A0
[2] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard (1989). Backpropagation Applied to Handwritten Zip Code Recognition. Neural Computation, 1(4), 541-551.
[3] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun (2016). Identity Mappings in Deep Residual Networks. arXiv:1603.05027 [cs.CV].
[4] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby (2020). An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv:2010.11929 [cs.CV].
[5] Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang (2021). CvT: Introducing Convolutions to Vision Transformers. arXiv:2103.15808 [cs.CV].
[6] Ian J. Goodfellow, Jonathon Shlens, Christian Szegedy (2014). Explaining and Harnessing Adversarial Examples. arXiv:1412.6572 [stat.ML].
[7] Ian J. Goodfellow, Jonathon Shlens, Christian Szegedy (2014). Explaining and Harnessing Adversarial Examples, pp. 3-5.
[8] Ian J. Goodfellow, Jonathon Shlens, Christian Szegedy (2014). Explaining and Harnessing Adversarial Examples, pp. 5-7.
[9] Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, Ananthram Swami (2016). The Limitations of Deep Learning in Adversarial Settings. 2016 IEEE European Symposium on Security and Privacy (EuroS&P), pp. 372-387.
[10] Nicholas Carlini, David Wagner (2017). Towards Evaluating the Robustness of Neural Networks. 2017 IEEE Symposium on Security and Privacy (SP).
[11] Naveed Akhtar, Ajmal Mian (2018). Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey. arXiv:1801.00553 [cs.CV].
[12] Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, Ananthram Swami. Practical Black-Box Attacks against Machine Learning, pp. 4-7.
[13] Adversarial Example Detection for Adversarial Defense: Feature Squeezing (對抗防禦之對抗樣本檢測). https://www.cnblogs.com/hickey2048/p/15136348.html
[14] Joachim Folz, Sebastian Palacio, Joern Hees, Damian Borth, Andreas Dengel (2020). Adversarial Defense based on Structure-to-Signal Autoencoders. 2020 IEEE Winter Conference on Applications of Computer Vision (WACV).
[15] Wikipedia: Halftone (半色調). https://zh.wikipedia.org/zh-tw/%E5%8D%8A%E8%89%B2%E8%AA%BF
[16] Printing Lesson 4: Prepress Basics - Halftone Dots (印刷第四課:印前基礎-網點). https://jeseinfini.com/2016/02/21/%E5%8D%B0%E5%88%B7%E7%AC%AC%E5%9B%9B%E8%AA%B2%EF%BC%9A%E5%8D%B0%E5%89%8D%E5%9F%BA%E7%A4%8E-%E7%B6%B2%E9%BB%9E/
[17] Chen-Wei Huang, Wen-Hung Liao (2021). Defense Mechanism Against Adversarial Attacks Using Density-based Representation of Images, pp. 5-8.
[18] Wikipedia: Floyd–Steinberg dithering. https://en.wikipedia.org/wiki/Floyd%E2%80%93Steinberg_dithering
[19] Menghan Xia, Wenbo Hu, Xueting Liu, Tien-Tsin Wong (2021). Deep Halftoning with Reversible Binary Pattern. IEEE/CVF International Conference on Computer Vision (ICCV), pp. 14000-14009.
[20] ImageNet dataset. https://www.image-net.org/
[21] Tiny-ImageNet dataset. https://www.kaggle.com/c/tiny-imagenet/overview
[22] ImageNet-100 dataset. https://www.kaggle.com/datasets/ambityga/imagenet100
[23] Wikipedia: Mean Squared Error (均方誤差). https://zh.wikipedia.org/zh-tw/%E5%9D%87%E6%96%B9%E8%AF%AF%E5%B7%AE
[24] Wikipedia: Peak Signal-to-Noise Ratio (峰值訊噪比). https://zh.wikipedia.org/zh-tw/%E5%B3%B0%E5%80%BC%E4%BF%A1%E5%99%AA%E6%AF%94
[25] Wikipedia: Structural Similarity (結構相似性). https://zh.wikipedia.org/zh-tw/%E7%B5%90%E6%A7%8B%E7%9B%B8%E4%BC%BC%E6%80%A7
Description Master's degree
National Chengchi University
Master's In-service Program, Department of Computer Science
108971016
Identifier G0108971016
URI https://nccur.lib.nccu.edu.tw/handle/140.119/151503
Source http://thesis.lib.nccu.edu.tw/record/#G0108971016
Type thesis
Format 2874069 bytes, application/pdf

Table of Contents
Abstract i
Contents iv
List of Figures vii
List of Tables ix
Chapter 1: Introduction 1
1.1 Research Background and Motivation 1
1.2 Research Objectives 2
1.3 Thesis Organization 3
Chapter 2: Related Work and Technical Background 4
2.1 Deep Learning Techniques and Architectures 4
2.1.1 CNN Models and the ResNet Residual Network 5
2.1.2 Vision Transformer (ViT) 8
2.1.3 Convolutional Vision Transformers (CvT) 11
2.2 Adversarial Attacks 14
2.2.1 Attack Methods 16
2.2.2 Defense Methods 21
2.3 Halftoning 24
2.3.1 Halftone Types 25
2.3.2 Adversarial Defense via Halftone Conversion 27
2.4 Halftone Restoration 28
2.4.1 Floyd–Steinberg Dithering 29
2.4.2 Reversible Halftoning 30
2.5 Summary 31
Chapter 3: Methodology 33
3.1 Datasets 33
3.1.1 ImageNet 33
3.1.2 Tiny-ImageNet 33
3.1.3 ImageNet-100 34
3.2 Image Quality Assessment for Halftone Conversion and Restoration 34
3.2.1 Mean Square Error (MSE) 34
3.2.2 Peak Signal-to-Noise Ratio (PSNR) 35
3.2.3 Structural Similarity Index (SSIM) 36
3.3 Research Framework 37
3.3.1 Research Design 37
3.3.2 Model Training 37
3.3.3 Training Data Iteration and Adversarial Impact 38
Chapter 4: Experiments and Analysis of Results 40
4.1 Experimental Environment 40
4.2 Research Process 41
4.2.1 Dataset Selection and Processing 41
4.2.2 Model Training 43
4.2.3 Halftone Conversion and Restoration Tests on Original Data 45
4.2.4 Adversarial Testing: Original Images 46
4.2.5 Adversarial Testing: Halftone Images 47
4.2.6 Adversarial Testing: Halftone Conversion and Restoration 48
4.2.7 Adversarial Testing: Effect of Iterated Conversion and Restoration 50
4.2.8 Adversarial Testing: Mixed-Data Models 52
4.3 Analysis of Results 54
Chapter 5: Conclusions and Future Work 58
5.1 Conclusions 58
5.2 Future Research Directions 58
References 60
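Section 3.2 of the table of contents evaluates the halftone round-trip with MSE, PSNR, and SSIM. As a hedged sketch of how these three measures are commonly computed (scikit-image is an assumed dependency; the thesis does not state its tooling):

    # Illustrative computation of the quality measures named in section 3.2.
    # scikit-image is an assumed dependency; arrays are H x W x C in [0, 1].
    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def restoration_quality(original: np.ndarray, restored: np.ndarray) -> dict:
        """Compare an original image against its halftone-restored version."""
        mse = float(np.mean((original - restored) ** 2))  # Mean Square Error
        psnr = peak_signal_noise_ratio(original, restored, data_range=1.0)
        ssim = structural_similarity(original, restored,
                                     channel_axis=-1, data_range=1.0)
        return {'MSE': mse, 'PSNR (dB)': psnr, 'SSIM': ssim}

Higher PSNR and SSIM (and lower MSE) indicate that the restoration preserved more of the original image; these measures quantify the information loss that the defense trades against attack suppression.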