Title Robust Convolutional Neural Networks Through Gaussian Filter to Defend Against FGSM Adversarial Attacks (透過高斯濾波強化卷積神經網路來阻擋 FGSM 對抗式攻擊)
Author 陳彥宏 (Chen, Yen-Hung)
Advisor 胡毓忠 (Hu, Yuh-Jong)
Keywords Adversarial Attacks; Robustness; Gaussian Filter; Denoise; Image Classification; Convolutional Neural Network
Date 2022
Uploaded 2-Sep-2022 15:47:23 (UTC+8)
Abstract Thanks to advances in hardware, convolutional neural networks (CNNs) have been successfully applied to autonomous driving, where they detect stop signs, vehicles, and pedestrians so that the vehicle can act on those detections. CNNs have a weakness, however: adding carefully crafted noise to a "stop" sign can cause it to be misclassified as a speed-limit sign. Such manipulations are called adversarial attacks, and they pose a serious risk to CNN applications. Adversarial defense and improving model robustness have therefore become important research directions for reducing this risk and increasing confidence in CNN models. This thesis proposes a defense against adversarial attacks. In the training phase, in addition to the original training data, we apply a Gaussian filter to the original images to generate additional training samples; training on this augmented set strengthens the CNN's robustness. In the testing phase, we place a Gaussian filter in front of the robust model to denoise incoming images, which further improves classification accuracy under attack.
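The pipeline described in the abstract lends itself to a short illustration. The sketch below, in PyTorch/torchvision, follows the thesis outline (VGG-16 on CIFAR-10, Sections 3.1-3.7): Gaussian-filtered copies of the training images augment the training set, an FGSM step [7] crafts adversarial inputs, and a Gaussian filter denoises images before they reach the model. The framework choice, blur kernel size and sigma, epsilon, and batch size are illustrative assumptions, not the author's reported settings.

import torch
import torch.nn.functional as F
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Gaussian filter reused for training-time augmentation and test-time denoising.
# kernel_size and sigma are assumed values, not the thesis's reported settings.
gaussian = transforms.GaussianBlur(kernel_size=3, sigma=1.0)

# Training set: original CIFAR-10 images plus their Gaussian-filtered copies.
to_tensor = transforms.ToTensor()
plain = datasets.CIFAR10("data", train=True, download=True, transform=to_tensor)
blurred = datasets.CIFAR10("data", train=True, download=True,
                           transform=transforms.Compose([to_tensor, gaussian]))
train_loader = DataLoader(ConcatDataset([plain, blurred]),
                          batch_size=128, shuffle=True)

# VGG-16 with a 10-class head for CIFAR-10; train it with ordinary
# cross-entropy over train_loader (standard training loop omitted).
model = models.vgg16(num_classes=10).to(device)

def fgsm(model, x, y, eps=8 / 255):
    # Fast Gradient Sign Method [7]: one signed-gradient step on the input.
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def predict_with_defense(model, x):
    # Test-time defense from the abstract: denoise with the Gaussian filter
    # before the (robust) CNN classifies the image.
    model.eval()
    with torch.no_grad():
        return model(gaussian(x)).argmax(dim=1)

Evaluating predict_with_defense on inputs perturbed by fgsm against the trained model corresponds to the robustness test the abstract describes: accuracy on these denoised adversarial images is compared with the undefended model's accuracy.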
References
[1] Behzadan, V. and Munir, A. (2017). Whatever does not kill deep reinforcement learning, makes it stronger. arXiv:1712.09344.
[2] Biggio, B. and Roli, F. (2018). Wild patterns: Ten years after the rise of adversarial machine learning. arXiv:1712.03141.
[3] Biggio, B., Corona, I., Maiorca, D., et al. (2017). Evasion attacks against machine learning at test time. arXiv:1708.06131.
[4] Biggio, B., Fumera, G., and Roli, F. (2014). Pattern recognition systems under attack: Design issues and research challenges. IJPRAI 28(7).
[5] Biggio, B., Nelson, B., and Laskov, P. (2012). Poisoning attacks against support vector machines. In: 29th ICML.
[6] Carlini, N. and Wagner, D. (2017). Towards evaluating the robustness of neural networks. arXiv:1608.04644.
[7] Goodfellow, I. J., Shlens, J., and Szegedy, C. (2014). Explaining and harnessing adversarial examples. arXiv:1412.6572.
[8] Gu, S. and Rigazio, L. (2014). Towards deep neural network architectures robust to adversarial examples. arXiv:1412.5068.
[9] Harder, P., Pfreundt, F.-J., Keuper, M., et al. (2021). Spectral defense: Detecting adversarial attacks on CNNs in the Fourier domain. arXiv:2103.03000.
[10] Ilahi, I., Usama, M., Qadir, J., et al. (2020). Challenges and countermeasures for adversarial attacks on deep reinforcement learning. arXiv:2001.09684.
[11] Kos, J. and Song, D. (2017). Delving into adversarial attacks on deep policies. arXiv:1705.06452.
[12] Krizhevsky, A. (2009). Learning multiple layers of features from tiny images. Computer Science Department, University of Toronto, Tech. Rep.
[13] Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2017). ImageNet classification with deep convolutional neural networks. Communications of the ACM, 60(6), pp. 84-90.
[14] Kurakin, A., Goodfellow, I., and Bengio, S. (2016). Adversarial examples in the physical world. arXiv:1607.02533.
[15] Lee, K., Lee, K., Lee, H., et al. (2018). A simple unified framework for detecting out-of-distribution samples and adversarial attacks. arXiv:1807.03888.
[16] Li, B., Chen, C., Wang, W., et al. (2019). Certified adversarial robustness with additive noise. arXiv:1809.03113.
[17] Li, Z., Feng, C., Zheng, J., et al. (2020). Towards adversarial robustness via feature matching. IEEE.
[18] Lin, Y.-C., Liu, M.-Y., Sun, M., et al. (2017). Detecting adversarial attacks on neural network policies with visual foresight. arXiv:1710.00814.
[19] Liu, A., Liu, X., Zhang, C., et al. (2020). Training robust deep neural networks via adversarial noise propagation. arXiv:1909.09034.
[20] Ma, X., Li, B., Wang, Y., et al. (2018). Characterizing adversarial subspaces using local intrinsic dimensionality. arXiv:1801.02613.
[21] Madry, A., Makelov, A., Schmidt, L., et al. (2019). Towards deep learning models resistant to adversarial attacks. arXiv:1706.06083.
[22] Muñoz-González, L., Biggio, B., Demontis, A., et al. (2018). Towards poisoning of deep learning algorithms with back-gradient optimization. In: AISec '17, ACM, pp. 27-38.
[23] Papernot, N., McDaniel, P., Goodfellow, I., et al. (2017). Practical black-box attacks against machine learning. arXiv:1602.02697.
[24] Russakovsky, O., Deng, J., Su, H., et al. (2015). ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3), pp. 211-252.
[25] Shafique, M., Naseer, M., Theocharides, T., et al. (2020). Robust machine learning systems: Challenges, current trends, perspectives, and the road ahead. IEEE Design & Test, 37(2).
[26] Simonyan, K. and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556.
[27] Tramèr, F., Zhang, F., Juels, A., et al. (2016). Stealing machine learning models via prediction APIs. arXiv:1609.02943.
[28] Zhang, K., Zuo, W., Chen, Y., et al. (2016). Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. arXiv:1608.03981.
Description Master's thesis
National Chengchi University
In-service Master's Program, Department of Computer Science
109971008
Source http://thesis.lib.nccu.edu.tw/record/#G0109971008
Data type thesis
URI http://nccur.lib.nccu.edu.tw/handle/140.119/141839
Table of Contents
Abstract (Chinese) / Abstract (English) / Contents / List of Figures
1 Introduction
2 Background and Related Work
2.1 Adversarial Attacks
2.2 Adversarial Defense
3 Methodology
3.1 VGG-16 Architecture and CIFAR-10 Dataset
3.2 Gaussian Filter
3.3 Images Definition
3.3.1 Normal Images and Adversarial Images
3.3.2 Denoise Images
3.4 Generate Adversarial Images
3.5 Gaussian Filter Defends Against Adversarial Noise
3.6 Robust VGG-16 Model
3.7 Defend Against Adversarial Attack
4 Experiment
5 Conclusions and Future Works
5.1 Conclusions
5.2 Future Works
Bibliography
DOI 10.6814/NCCU202201368