Please use this identifier to cite or link to this item: https://ah.lib.nccu.edu.tw/handle/140.119/136967
Title: Image Synthesis Using Generative Adversarial Network with Frequency Domain Constraints (結合頻率域損失之生成對抗網路影像合成機制)
Author: Zeng, Hong-Ren (曾鴻仁)
Contributors: Liao, Wen-Hung (廖文宏); Zeng, Hong-Ren (曾鴻仁)
Keywords: Generative adversarial network
Discrete Fourier transform
Discrete wavelet transform
Fake image detection
Date: 2021
Upload date: 2-Sep-2021
Abstract: Generative adversarial network (GAN) technology continues to advance, and the images it produces are often indistinguishable from real ones to the human eye. However, because GANs have difficulty reconstructing high-frequency information during training, artifacts can be observed in the frequency domain, allowing detection models to identify the generated images easily. Studies have also pointed out that high-frequency components hinder GAN learning, so synthesizing images while maintaining good frequency-domain behavior becomes a major challenge.
This thesis approaches the problem from the frequency-domain perspective. In addition to verifying that removing part of the high-frequency noise indeed helps GANs learn more effectively, it proposes adding frequency losses to improve training. Experiments show that losses based on either the discrete Fourier transform or the discrete wavelet transform effectively help GANs produce higher-quality images: on the CelebA face dataset, images generated with the discrete wavelet loss achieve a best FID of 6.53, a substantial improvement over SNGAN's FID of 16.53, and models trained with frequency losses are also more stable. In addition, this thesis evaluates the results with a general-purpose real/fake classification model; images produced by the improved model significantly reduce the classifier's accuracy, indicating that they are more realistic. These results confirm that providing frequency information to GANs benefits the training process and offer reference directions for subsequent GAN research.
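For reference, the FID values quoted above follow the standard Fréchet Inception Distance, which compares Gaussian fits to Inception features of the real and generated image sets (lower is better):

\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2 + \operatorname{Tr}\!\left( \Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2} \right)

where (\mu_r, \Sigma_r) and (\mu_g, \Sigma_g) are the mean and covariance of the feature embeddings of the real and generated images, respectively.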
Generative adversarial networks (GANs) have evolved rapidly since their introduction in 2014. The quality of synthesized images has improved significantly, making it difficult for human observers to tell real and GAN-created images apart. However, due to a GAN's inability to faithfully reconstruct the high-frequency components of a signal, artifacts can be observed in the frequency domain representation and easily detected using simple classification models. Researchers have also studied the adverse effects of high-frequency components on the training process. It is thus a challenging task to synthesize visually realistic images while maintaining fidelity in the frequency domain.
This thesis attempts to enhance the quality of images generated by generative adversarial networks by incorporating frequency domain constraints. To begin with, we observe that the overall training process becomes more stable when high-frequency noise is filtered out. We then propose to include frequency domain losses in the generator and discriminator networks and investigate their effects on the generated images. Experimental results indicate that both discrete Fourier transform (DFT) and discrete wavelet transform (DWT) losses are effective in improving the quality of the generated images, and the training processes turn out to be more stable. We verify our results using a classification model designed to detect fake images. Its accuracy is significantly reduced on images generated by our modified GAN model, demonstrating the advantages of incorporating frequency domain constraints in generative adversarial networks.
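As a concrete illustration, below is a minimal sketch (PyTorch assumed; the function names, the Haar-filter formulation, the low-pass keep ratio, and the weight lambda_freq are illustrative assumptions, not the thesis's exact implementation) of the two frequency losses and the high-frequency filtering described above:

import torch
import torch.nn.functional as F

def dft_loss(fake, real):
    # L1 distance between log-magnitude DFT spectra of generated and
    # real batches; torch.fft.fft2 transforms the last two dimensions.
    fake_mag = torch.log1p(torch.abs(torch.fft.fft2(fake)))
    real_mag = torch.log1p(torch.abs(torch.fft.fft2(real)))
    return F.l1_loss(fake_mag, real_mag)

def haar_dwt(x):
    # One-level Haar DWT as fixed stride-2 convolutions, so the
    # transform stays differentiable and gradients reach the generator.
    ll = torch.tensor([[0.5, 0.5], [0.5, 0.5]])
    lh = torch.tensor([[0.5, 0.5], [-0.5, -0.5]])
    hl = torch.tensor([[0.5, -0.5], [0.5, -0.5]])
    hh = torch.tensor([[0.5, -0.5], [-0.5, 0.5]])
    k = torch.stack([ll, lh, hl, hh]).unsqueeze(1).to(x.device, x.dtype)
    c = x.shape[1]
    # Depthwise application: four subbands per input channel.
    return F.conv2d(x, k.repeat(c, 1, 1, 1), stride=2, groups=c)

def dwt_loss(fake, real):
    # L1 distance between the wavelet subbands of fake and real images.
    return F.l1_loss(haar_dwt(fake), haar_dwt(real))

def remove_high_freq(x, keep=0.75):
    # Zero the outer band of the centered spectrum and invert, i.e.
    # low-pass filter training images to drop high-frequency noise.
    spec = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    h, w = x.shape[-2:]
    ch, cw = int(h * keep) // 2, int(w * keep) // 2
    mask = torch.zeros_like(spec.real)
    mask[..., h // 2 - ch : h // 2 + ch, w // 2 - cw : w // 2 + cw] = 1
    return torch.fft.ifft2(torch.fft.ifftshift(spec * mask, dim=(-2, -1))).real

# Hypothetical combined generator objective:
#   g_loss = adversarial_loss + lambda_freq * dwt_loss(fake, real)
# where lambda_freq is a tunable weight (dft_loss may be used in its place).

Implementing the DWT with fixed convolutions keeps the transform differentiable, which is what allows a wavelet-domain loss to propagate gradients back into the generator.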
Description: Master's thesis
National Chengchi University
Department of Computer Science
108753148
Source: http://thesis.lib.nccu.edu.tw/record/#G0108753148
Type: thesis
Appears in Collections: Theses (學位論文)

Files in This Item: 314801.pdf (3.83 MB, Adobe PDF)