Title: Forgery Detection in Satellite Imagery Using Deep Neural Networks (original title: 基於深度學習框架之衛星圖像偽造偵測)
Author: Chang, Yi-Shan (張以姍)
Advisor: Liao, Wen-Hung (廖文宏)
Keywords: Satellite imagery; Generative adversarial network; Image forgery detection; Adversarial attacks
Date: 2022
Uploaded: 2-Sep-2022 15:06:10 (UTC+8)
Abstract:
The technology of generative adversarial networks (GANs) is constantly evolving, and the synthesized images can no longer be reliably distinguished by the human eye alone. GANs have been applied to the analysis of satellite images, mostly for the purpose of data augmentation. Recently, however, we have seen a twist in their usage: in information warfare, GANs have been used to create fake satellite images or to modify image content, for example by inserting fake bridges or clouds, in order to mislead or conceal important intelligence. To address the increasing number of counterfeit satellite images, the goal of this thesis is to develop forgery detection algorithms that can classify fake images robustly and efficiently.
This thesis divides satellite image forgery detection into three parts. The first part deals with the case in which the entire image is forged. Three satellite image synthesis methods, namely ProGAN, cGAN, and CycleGAN, have been investigated; the effect of image pre-processing such as histogram equalization and bilateral filtering has been evaluated, and adversarial attacks on the detection models have also been studied. The second part evaluates the effectiveness of the detection models when only certain parts of a satellite image have been modified. The third part discusses the impact of different generation and forgery methods on the detection model.
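For readers unfamiliar with the two pre-processing operations named above, the following is a minimal illustrative sketch of how they could be applied with OpenCV in Python. It is not the thesis code; the input file name and filter parameters are placeholders chosen for the example.

# Illustrative sketch only (not the thesis code): histogram equalization and
# bilateral filtering applied with OpenCV. File names and parameters are placeholders.
import cv2

img = cv2.imread("satellite_tile.png")  # hypothetical input image
assert img is not None, "could not read input image"

# Histogram equalization works on a single channel; a common recipe is to
# equalize the luminance (Y) channel of the YCrCb representation.
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
equalized = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

# Bilateral filtering smooths textures while preserving edges.
# Arguments: neighborhood diameter, sigma in color space, sigma in coordinate space.
filtered = cv2.bilateralFilter(img, 9, 75, 75)

cv2.imwrite("satellite_tile_equalized.png", equalized)
cv2.imwrite("satellite_tile_bilateral.png", filtered)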
Experiments show that satellite images generated by different GANs can be easily identified by machine learning techniques. When enhancement is applied to the images, however, the detection models fail to distinguish their authenticity; a four-class classification model is therefore proposed to address this issue. Regarding the experiments on partial forgery, it is found that methods which work for images taken by conventional cameras fail to perform satisfactorily on satellite images, possibly because of their different image characteristics. Finally, when the input is partitioned into small tiles, each tile is synthesized by a GAN, and the results are merged back to the original resolution, clear discontinuities can be observed around the tile boundaries, making the artifacts easy to detect. If a synthesized patch is blended into a real image, the detection model has difficulty when the synthesized block occupies only a small portion of the image.
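The record does not specify the architecture of the four-class model. Purely as an illustration, and assuming an ImageNet pre-trained ResNet-50 backbone (the choice used in [10]) with the four classes laid out as real / ProGAN / cGAN / CycleGAN, a PyTorch sketch could look as follows; both the backbone and the class layout are assumptions, not the thesis implementation.

# Illustrative sketch only: four-way classifier head on an ImageNet
# pre-trained ResNet-50 (backbone choice follows [10]; the class layout
# real / ProGAN / cGAN / CycleGAN is an assumption for this example).
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # assumed classes: real, ProGAN, cGAN, CycleGAN

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace the 1000-way head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on a dummy batch, standing in for real satellite tiles.
images = torch.randn(8, 3, 224, 224)          # placeholder image batch
labels = torch.randint(0, NUM_CLASSES, (8,))  # placeholder class labels
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()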
The above experiments suggest that forgery detection in satellite images can be achieved when the whole image is synthesized. The task becomes more challenging if only part of the image is altered, an issue that deserves further investigation.
References:
[1] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., ... & Bengio, Y. (2014). Generative adversarial nets. Advances in Neural Information Processing Systems, 27.
[2] https://pansci.asia/archives/342421
[3] https://www.defenseone.com/technology/2019/03/next-phase-ai-deep-faking-whole-world-and-china-ahead/155944/
[4] https://medium.com/sentinel-hub/its-a-faaaake-or-not-bace4f0c01ec
[5] Zhao, B., Zhang, S., Xu, C., Sun, Y., & Deng, C. (2021). Deep fake geography? When geospatial data encounter Artificial Intelligence. Cartography and Geographic Information Science, 48(4), 338-352.
[6] Karras, T., Aila, T., Laine, S., & Lehtinen, J. (2017). Progressive growing of GANs for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196.
[7] https://blog.softwaremill.com/generative-adversarial-networks-in-satellite-image-datasets-augmentation-b7045d2f51ab
[8] Zhu, J. Y., Park, T., Isola, P., & Efros, A. A. (2017). Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision (pp. 2223-2232).
[9] Isola, P., Zhu, J. Y., Zhou, T., & Efros, A. A. (2017). Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1125-1134).
[10] Wang, S. Y., Wang, O., Zhang, R., Owens, A., & Efros, A. A. (2020). CNN-generated images are surprisingly easy to spot... for now. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 8695-8704).
[11] https://zhuanlan.zhihu.com/p/56165819
[12] Liang, J., Niu, L., & Zhang, L. (2021, July). Inharmonious region localization. In 2021 IEEE International Conference on Multimedia and Expo (ICME) (pp. 1-6). IEEE.
[13] Wu, Y., AbdAlmageed, W., & Natarajan, P. (2019). ManTra-Net: Manipulation tracing network for detection and localization of image forgeries with anomalous features. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 9543-9552).
[14] Zhang, X., Karaman, S., & Chang, S. F. (2019, December). Detecting and simulating artifacts in GAN fake images. In 2019 IEEE International Workshop on Information Forensics and Security (WIFS) (pp. 1-6). IEEE.
[15] https://github.com/facebookresearch/pytorch_GAN_zoo
[16] https://github.com/softwaremill/sentinel-cgan
[17] https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix
[18] Goodfellow, I. J., Shlens, J., & Szegedy, C. (2014). Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
[19] Kurakin, A., Goodfellow, I., & Bengio, S. (2016). Adversarial examples in the physical world.
[20] Madry, A., Makelov, A., Schmidt, L., Tsipras, D., & Vladu, A. (2017). Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083.
Description: Master's thesis; National Chengchi University; Department of Computer Science; 109753202
Source: http://thesis.lib.nccu.edu.tw/record/#G0109753202
Type: thesis
Other identifier: G0109753202
URI: http://nccur.lib.nccu.edu.tw/handle/140.119/141644
Table of contents:
Acknowledgements i
Abstract (Chinese) ii
Abstract (English) iii
Table of Contents v
List of Tables vii
List of Figures ix
Chapter 1 Introduction 1
1.1 Background and Motivation 1
1.2 Objectives and Contributions 4
1.3 Thesis Organization 7
Chapter 2 Background and Related Work 8
2.1 Generative Adversarial Network Architectures 8
2.1.1 ProGAN 8
2.1.2 cGAN 9
2.1.3 CycleGAN 11
2.1.4 Pix2Pix 11
2.1.5 Summary 12
2.2 Authenticity Detection of Generated Images 12
2.2.1 Pre-trained Detection Models 12
2.2.2 Summary 14
2.3 Detection of Partial-Region Forgery 14
2.3.1 DIRL 14
2.3.2 ManTraNet 16
2.3.3 Summary 17
Chapter 3 Datasets and Generative Adversarial Network Models 18
3.1 Datasets for Full-Image Synthesis 18
3.1.1 Satellite Imagery with ProGAN (no reference) 18
3.1.2 Satellite Imagery with cGAN (partial reference) 19
3.1.3 Satellite Imagery with CycleGAN (full reference) 21
3.1.4 Satellite Imagery with Pix2Pix (full reference) 22
3.2 Datasets for Partial-Region Forgery 23
Chapter 4 Methodology and Experimental Design 29
4.1 Full-Image Synthesis Experiments 29
4.2 Partial-Region Forgery Experiments 32
4.3 Other Forgery Experiments 33
Chapter 5 Experimental Results and Analysis 35
5.1 Results for Full-Image Forgery 35
5.1.1 Individual Classification Models 35
5.1.2 General Classification Model 47
5.2 Results for Partial-Region Forgery 60
5.2.1 Detection Results of the DIRL Model 61
5.2.2 Detection Results of the ManTraNet Model 67
5.3 Results for Other Forgery Methods 70
5.3.1 Tile-wise Generation and Merging of Satellite Images 70
5.3.2 Forged Patch Placed in the Upper-Left Corner of a Real Image 71
5.3.3 Model Test Results 72
5.4 Summary 75
Chapter 6 Conclusions and Future Work 77
6.1 Conclusions 77
6.2 Future Work 78
References 79
Appendices 82
Appendix A: Four-Class Image Classification with Enhancement (the three GAN models trained separately) 82
Appendix B: Four-Class Image Classification with Adversarial Attacks (the three GAN models trained separately) 86
Appendix C: Binary Classification with the General Model: Enhancement Results 90
Appendix D: Test Results of the Fine-Tuned DIRL Model 92
Format: application/pdf, 8650719 bytes
DOI: 10.6814/NCCU202201363