Please use this identifier to cite or link to this item: https://ah.lib.nccu.edu.tw/handle/140.119/136967
DC Field | Value | Language
dc.contributor.advisor | 廖文宏 | zh_TW
dc.contributor.advisor | Liao, Wen-Hung | en_US
dc.contributor.author | 曾鴻仁 | zh_TW
dc.contributor.author | Zeng, Hong-Ren | en_US
dc.creator | 曾鴻仁 | zh_TW
dc.creator | Zeng, Hong-Ren | en_US
dc.date | 2021 | en_US
dc.date.accessioned | 2021-09-02T08:56:07Z | -
dc.date.available | 2021-09-02T08:56:07Z | -
dc.date.issued | 2021-09-02T08:56:07Z | -
dc.identifier | G0108753148 | en_US
dc.identifier.uri | http://nccur.lib.nccu.edu.tw/handle/140.119/136967 | -
dc.description | 碩士 (Master's degree) | zh_TW
dc.description | 國立政治大學 (National Chengchi University) | zh_TW
dc.description | 資訊科學系 (Department of Computer Science) | zh_TW
dc.description | 108753148 | zh_TW
dc.description.abstract | 生成對抗網路的技術不斷精進,所產生的圖像人眼往往無法辨別是真實或合成,然而由於生成對抗網路在學習過程較難重建高頻資訊,導致在頻率域上可觀察到偽影,因此能被檢測模型輕易地辨識出來。同時也有研究指出頻率上的高頻分量,不利於生成對抗網路進行學習,因此如何在生成圖像時兼顧頻率域的學習效果,成為一大挑戰。
本論文從頻率域的角度著手,除了驗證去除掉部分高頻上的雜訊,的確能夠更有效幫助生成對抗網路之學習,也提出了利用添加頻率損失的方式來改善訓練效果。經實驗發現利用離散傅立葉轉換或是離散小波轉換的損失,都能有效幫助生成對抗網路產生品質更好的圖像,在CelebA人臉資料集上,添加離散小波損失的生成圖FID最佳能達到6.53,比起SNGAN的FID為16.53進步許多,添加頻率損失的模型在訓練上也更加穩定。另外本論文也使用通用的真偽分類模型進行測試,其改善後的模型所產生的圖片能讓辨識準確率有效降低,代表了經過改進後的模型生成的圖像更加逼真,證實了提供頻率的資訊給生成對抗網路的確有助於訓練流程,也提供後續對於生成對抗網路的研究有更多的參考方向。 | zh_TW
dc.description.abstract | Generative adversarial networks (GANs) have evolved rapidly since their introduction in 2014. The quality of synthesized images has improved significantly, making it difficult for a human observer to tell real and GAN-created images apart. However, owing to a GAN's inability to faithfully reconstruct the high-frequency components of a signal, artifacts can be observed in the frequency domain representation and easily detected with simple classification models. Researchers have also studied the adverse effects of high-frequency components on the training process. It is thus a challenging task to synthesize visually realistic images while maintaining fidelity in the frequency domain.
This thesis attempts to enhance the quality of images generated by generative adversarial networks by incorporating frequency domain constraints. To begin with, we observe that the overall training process becomes more stable when part of the high-frequency noise is filtered out. We then propose to include frequency domain losses in the generator and discriminator networks and investigate their effects on the generated images. Experimental results indicate that both discrete Fourier transform (DFT) and discrete wavelet transform (DWT) losses are effective in improving the quality of the generated images, and the training process turns out to be more stable. We verify our results using a classification model designed to detect fake images. Its accuracy is significantly reduced on images generated by our modified GAN model, demonstrating the advantages of incorporating frequency domain constraints in generative adversarial networks. | en_US
dc.description.tableofcontents | Chapter 1: Introduction 1
1.1 Research Background and Motivation 1
1.2 Research Objectives and Contributions 3
1.3 Thesis Organization 4
Chapter 2: Background and Related Work 5
2.1 Introduction to GAN Architectures and Applications 5
2.1.1 AutoEncoder 5
2.1.2 GAN Architecture 6
2.1.3 GAN Applications and Related Research 8
2.2 Evaluating GAN Training Performance 14
2.2.1 Introduction to Classification Models 14
2.2.2 Quantitative Metrics for Generative Adversarial Models 16
2.3 Authenticity Detection Mechanisms for Generated Images 18
2.3.1 Detection Based on Image Features 19
2.3.2 Detection Based on the Frequency Domain 20
2.3.3 Summary 23
2.4 Frequency-Domain Mechanisms for Improving Generated Images 23
2.4.1 Frequency Characteristics of Convolutional Neural Networks 23
2.4.2 Frequency-Domain Characteristics of Generated Images 24
2.4.3 Frequency-Domain Improvement Mechanisms 27
2.4.4 Summary 29
Chapter 3: Methodology 30
3.1 Basic Idea 30
3.2 Preliminary Experimental Validation 30
3.3 Experimental Design 33
3.4 Experimental Procedure 35
3.4.1 Fourier Transform Loss 37
3.4.2 Wavelet Transform Loss 38
3.4.3 Generated-Image Detection Experiment 40
Chapter 4: Experimental Results and Analysis 42
4.1 Results of Adding Frequency-Domain Losses 42
4.1.1 Fourier Transform Loss Results 42
4.1.2 Wavelet Transform Loss Results 44
4.2 Overall Comparison 47
4.3 Training Results on the FFHQ Dataset 53
4.4 Training Results with the ProGAN Model 55
4.5 Detection Accuracy Results for Generated Images 56
4.6 Summary 58
Chapter 5: Conclusions and Future Work 60
5.1 Conclusions 60
5.2 Future Prospects 61
References 62 | zh_TW
dc.format.extent | 3921010 bytes | -
dc.format.mimetype | application/pdf | -
dc.source.uri | http://thesis.lib.nccu.edu.tw/record/#G0108753148 | en_US
dc.subject | 生成對抗網路 | zh_TW
dc.subject | 離散傅立葉轉換 | zh_TW
dc.subject | 離散小波轉換 | zh_TW
dc.subject | 偽圖偵測 | zh_TW
dc.subject | Generative adversarial network | en_US
dc.subject | Discrete Fourier transform | en_US
dc.subject | Discrete wavelet transform | en_US
dc.subject | Fake image detection | en_US
dc.title | 結合頻率域損失之生成對抗網路影像合成機制 | zh_TW
dc.title | Image Synthesis Using Generative Adversarial Network with Frequency Domain Constraints | en_US
dc.type | thesis | en_US
dc.relation.reference | [1] Y. LeCun, K. Kavukcuoglu, and C. Farabet, “Convolutional networks and applications in vision,” in ISCAS 2010 - 2010 IEEE International Symposium on Circuits and Systems: Nano-Bio Circuit Fabrics and Systems, 2010, pp. 253–256, doi: 10.1109/ISCAS.2010.5537907.
[2] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, Nature Publishing Group, pp. 436–444, May 27, 2015, doi: 10.1038/nature14539.
[3] I. J. Goodfellow et al., “Generative Adversarial Nets.” [Online]. Available: http://www.github.com/goodfeli/adversarial.
[4] T. Karras, S. Laine, and T. Aila, “A Style-Based Generator Architecture for Generative Adversarial Networks,” Dec. 2018. Accessed: Dec. 13, 2020. [Online]. Available: https://arxiv.org/abs/1812.04948.
[5] S. Lyu, “Deepfake Detection: Current Challenges and Next Steps.” Accessed: Apr. 23, 2021. [Online]. Available: https://deepfakedetectionchallenge.ai.
[6] “Experts: Spy used AI-generated face to connect with targets.” https://apnews.com/article/professional-networking-ap-top-news-artificial-intelligence-social-platforms-think-tanks-bc2f19097a4c4fffaa00de6770b8a60d (accessed Apr. 23, 2021).
[7] B. Dolhansky et al., “The DeepFake Detection Challenge (DFDC) Dataset,” Jun. 2020. Accessed: Apr. 14, 2021. [Online]. Available: http://arxiv.org/abs/2006.07397.
[8] A. Rössler, D. Cozzolino, L. Verdoliva, C. Riess, J. Thies, and M. Nießner, “FaceForensics++: Learning to Detect Manipulated Facial Images.” Accessed: Apr. 23, 2021. [Online]. Available: https://github.com/ondyari/FaceForensics.
[9] “Building Autoencoders in Keras.” https://blog.keras.io/building-autoencoders-in-keras.html (accessed May 02, 2021).
[10] “Overview of GAN Structure.” https://developers.google.com/machine-learning/gan.
[11] A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” Nov. 2016. Accessed: Dec. 11, 2020. [Online]. Available: https://arxiv.org/abs/1511.06434v2.
[12] M. Arjovsky, S. Chintala, and L. Bottou, “Wasserstein GAN,” arXiv, Jan. 2017. Accessed: Apr. 24, 2021. [Online]. Available: http://arxiv.org/abs/1701.07875.
[13] T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida, “Spectral Normalization for Generative Adversarial Networks,” arXiv, Feb. 2018. Accessed: Apr. 24, 2021. [Online]. Available: http://arxiv.org/abs/1802.05957.
[14] T. Karras, T. Aila, S. Laine, and J. Lehtinen, “Progressive Growing of GANs for Improved Quality, Stability, and Variation,” Oct. 2017. Accessed: Dec. 13, 2020. [Online]. Available: https://arxiv.org/abs/1710.10196.
[15] X. Huang and S. Belongie, “Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization,” Proc. IEEE Int. Conf. Comput. Vis., vol. 2017-October, pp. 1510–1519, Mar. 2017. Accessed: Apr. 23, 2021. [Online]. Available: http://arxiv.org/abs/1703.06868.
[16] A. Brock, J. Donahue, and K. Simonyan, “Large Scale GAN Training for High Fidelity Natural Image Synthesis,” arXiv, Sep. 2018. Accessed: Dec. 13, 2020. [Online]. Available: http://arxiv.org/abs/1809.11096.
[17] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks,” Proc. IEEE Int. Conf. Comput. Vis., vol. 2017-October, pp. 2242–2251, Mar. 2017. Accessed: Jan. 01, 2021. [Online]. Available: http://arxiv.org/abs/1703.10593.
[18] P. Singh and N. Komodakis, “Cloud-GAN: Cloud Removal for Sentinel-2 Imagery Using a Cyclic Consistent Generative Adversarial Network.” Accessed: Dec. 21, 2020. [Online]. Available: https://hal-enpc.archives-ouvertes.fr/hal-01832797.
[19] N. U. Din, K. Javed, S. Bae, and J. Yi, “Effective Removal of User-Selected Foreground Object from Facial Images Using a Novel GAN-Based Network,” IEEE Access, vol. 8, pp. 109648–109661, 2020, doi: 10.1109/ACCESS.2020.3001649.
[20] C. Ledig et al., “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network,” Proc. - 30th IEEE Conf. Comput. Vis. Pattern Recognition, CVPR 2017, vol. 2017-January, pp. 105–114, Sep. 2016. Accessed: Apr. 29, 2021. [Online]. Available: http://arxiv.org/abs/1609.04802.
[21] J. Guo, S. Lu, H. Cai, W. Zhang, Y. Yu, and J. Wang, “Long Text Generation via Adversarial Training with Leaked Information,” 32nd AAAI Conf. Artif. Intell., AAAI 2018, pp. 5141–5148, Sep. 2017. Accessed: Apr. 29, 2021. [Online]. Available: http://arxiv.org/abs/1709.08624.
[22] H.-W. Dong, W.-Y. Hsiao, L.-C. Yang, and Y.-H. Yang, “MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment,” 32nd AAAI Conf. Artif. Intell., AAAI 2018, pp. 34–41, Sep. 2017. Accessed: Apr. 29, 2021. [Online]. Available: http://arxiv.org/abs/1709.06298.
[23] “Convolutional neural network - Wikipedia.” https://en.wikipedia.org/wiki/Convolutional_neural_network (accessed Jun. 02, 2021).
[24] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition.” Accessed: Jun. 02, 2021. [Online]. Available: http://image-net.org/challenges/LSVRC/2015/.
[25] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen, “Improved Techniques for Training GANs,” Adv. Neural Inf. Process. Syst., pp. 2234–2242, Jun. 2016. Accessed: Apr. 25, 2021. [Online]. Available: http://arxiv.org/abs/1606.03498.
[26] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the Inception Architecture for Computer Vision,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Dec. 2016, vol. 2016-December, pp. 2818–2826, doi: 10.1109/CVPR.2016.308.
[27] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter, “GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium,” Adv. Neural Inf. Process. Syst., vol. 2017-December, pp. 6627–6638, Jun. 2017. Accessed: Apr. 25, 2021. [Online]. Available: http://arxiv.org/abs/1706.08500.
[28] H. Jeon, Y. Bang, J. Kim, and S. S. Woo, “T-GD: Transferable GAN-generated Images Detection Framework,” arXiv, Aug. 2020. Accessed: Apr. 19, 2021. [Online]. Available: http://arxiv.org/abs/2008.04115.
[29] L. Nataraj et al., “Detecting GAN generated Fake Images using Co-occurrence Matrices,” arXiv, Mar. 2019. Accessed: Nov. 23, 2020. [Online]. Available: http://arxiv.org/abs/1903.06836.
[30] C.-C. Hsu, C.-Y. Lee, and Y.-X. Zhuang, “Learning to Detect Fake Face Images in the Wild,” Proc. - 2018 Int. Symp. Comput. Consum. Control, IS3C 2018, pp. 388–391, Sep. 2018. Accessed: Apr. 21, 2021. [Online]. Available: http://arxiv.org/abs/1809.08754.
[31] Z. Liu, X. Qi, and P. Torr, “Global Texture Enhancement for Fake Face Detection in the Wild,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 8057–8066, Jan. 2020. Accessed: Apr. 21, 2021. [Online]. Available: http://arxiv.org/abs/2002.00133.
[32] S.-Y. Wang, O. Wang, R. Zhang, A. Owens, and A. A. Efros, “CNN-Generated Images Are Surprisingly Easy to Spot… for Now,” pp. 8692–8701, 2020, doi: 10.1109/cvpr42600.2020.00872.
[33] J. Frank, T. Eisenhofer, L. Schönherr, A. Fischer, D. Kolossa, and T. Holz, “Leveraging Frequency Analysis for Deep Fake Image Recognition,” ICML, 2020. [Online]. Available: http://arxiv.org/abs/2003.08685.
[34] X. Zhang, S. Karaman, and S. F. Chang, “Detecting and Simulating Artifacts in GAN Fake Images,” 2019 IEEE Int. Work. Inf. Forensics Secur., WIFS 2019, 2019, doi: 10.1109/WIFS47025.2019.9035107.
[35] H. Liu et al., “Spatial-Phase Shallow Learning: Rethinking Face Forgery Detection in Frequency Domain.”
[36] Z.-Q. J. Xu, Y. Zhang, T. Luo, Y. Xiao, and Z. Ma, “Frequency Principle: Fourier Analysis Sheds Light on Deep Neural Networks,” Commun. Comput. Phys., vol. 28, no. 5, pp. 1746–1767, Jan. 2019, doi: 10.4208/cicp.OA-2020-0085.
[37] N. Rahaman et al., “On the Spectral Bias of Neural Networks,” 36th Int. Conf. Mach. Learn., ICML 2019, vol. 2019-June, pp. 9230–9239, Jun. 2018. Accessed: Apr. 18, 2021. [Online]. Available: http://arxiv.org/abs/1806.08734.
[38] H. Wang, X. Wu, Z. Huang, and E. P. Xing, “High Frequency Component Helps Explain the Generalization of Convolutional Neural Networks,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 8681–8691, May 2019. Accessed: Nov. 24, 2020. [Online]. Available: http://arxiv.org/abs/1905.13545.
[39] Y. Chen, G. Li, C. Jin, S. Liu, and T. Li, “SSD-GAN: Measuring the Realness in the Spatial and Spectral Domains,” 2021. Accessed: Jan. 20, 2021. [Online]. Available: www.aaai.org.
[40] V. Dumoulin and F. Visin, “A guide to convolution arithmetic for deep learning,” Mar. 2016. Accessed: Apr. 25, 2021. [Online]. Available: http://arxiv.org/abs/1603.07285.
[41] Z. Li, P. Xia, X. Rui, Y. Hu, and B. Li, “Are High-Frequency Components Beneficial for Training of Generative Adversarial Networks,” Mar. 2021. Accessed: Apr. 25, 2021. [Online]. Available: http://arxiv.org/abs/2103.11093.
[42] R. Durall, M. Keuper, and J. Keuper, “Watch your Up-Convolution: CNN Based Generative Deep Neural Networks are Failing to Reproduce Spectral Distributions,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 7887–7896, Mar. 2020. Accessed: Apr. 25, 2021. [Online]. Available: http://arxiv.org/abs/2003.01826.
[43] “Large-scale CelebFaces Attributes (CelebA) Dataset.” http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html (accessed Apr. 26, 2021).
[44] K. S. Lee and C. Town, “Mimicry: Towards the Reproducibility of GAN Research,” arXiv, May 2020. Accessed: Apr. 26, 2021. [Online]. Available: http://arxiv.org/abs/2005.02494.
[45] “GitHub - NVlabs/ffhq-dataset: Flickr-Faces-HQ Dataset (FFHQ).” https://github.com/NVlabs/ffhq-dataset (accessed Jul. 13, 2021).
[46] C.-H. Hsia, J.-S. Chiang, and J.-M. Guo, “Multiple Moving Objects Detection and Tracking Using Discrete Wavelet Transform,” Discret. Wavelet Transform. - Biomed. Appl., pp. 201–220, Sep. 2011, doi: 10.5772/22325.
[47] “GitHub - facebookresearch/pytorch_GAN_zoo: A mix of GAN implementations including progressive growing.” https://github.com/facebookresearch/pytorch_GAN_zoo (accessed Jul. 15, 2021). | zh_TW
dc.identifier.doi | 10.6814/NCCU202101331 | en_US
item.openairecristype | http://purl.org/coar/resource_type/c_46ec | -
item.cerifentitytype | Publications | -
item.fulltext | With Fulltext | -
item.grantfulltext | restricted | -
item.openairetype | thesis | -
Appears in Collections: 學位論文 (Theses)
Files in This Item:
File | Size | Format
314801.pdf | 3.83 MB | Adobe PDF
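
The abstract above describes adding frequency-domain losses, based on the discrete Fourier transform and the discrete wavelet transform, to GAN training. The record itself contains no code, so the following is only a minimal sketch of what such losses could look like, written in PyTorch (the framework used by the pytorch_GAN_zoo repository cited as reference [47]); the function names, the single-level Haar wavelet, and the L1 distance are illustrative assumptions, not details taken from the thesis.

import torch
import torch.nn.functional as F

def dft_magnitude_loss(fake: torch.Tensor, real: torch.Tensor) -> torch.Tensor:
    # L1 distance between the 2D DFT magnitude spectra of two image batches.
    fake_mag = torch.fft.fft2(fake, norm="ortho").abs()
    real_mag = torch.fft.fft2(real, norm="ortho").abs()
    return F.l1_loss(fake_mag, real_mag)

def haar_dwt(x: torch.Tensor):
    # Single-level 2D Haar DWT via 2x2 neighborhood sums and differences.
    # Assumes input of shape (N, C, H, W) with even H and W.
    a = x[..., 0::2, 0::2]  # top-left pixel of each 2x2 block
    b = x[..., 0::2, 1::2]  # top-right
    c = x[..., 1::2, 0::2]  # bottom-left
    d = x[..., 1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2  # low-frequency approximation subband
    lh = (a + b - c - d) / 2  # vertical-detail subband
    hl = (a - b + c - d) / 2  # horizontal-detail subband
    hh = (a - b - c + d) / 2  # diagonal-detail subband
    return ll, lh, hl, hh

def haar_dwt_loss(fake: torch.Tensor, real: torch.Tensor) -> torch.Tensor:
    # Sum of L1 distances over the four Haar subbands.
    return sum(F.l1_loss(f, r) for f, r in zip(haar_dwt(fake), haar_dwt(real)))

In a typical training loop such a term would be added to the generator's adversarial objective, e.g. g_loss = adv_loss + lambda_freq * dft_magnitude_loss(fake, real), where lambda_freq is a hypothetical weighting hyperparameter; the thesis itself should be consulted for the exact formulation it uses.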
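
For context on the numbers quoted in the Chinese abstract (a best FID of 6.53 for the DWT-loss model versus 16.53 for SNGAN on CelebA), the Fréchet Inception Distance of Heusel et al. [27] compares Gaussian fits to Inception-v3 features of real and generated images, with lower values indicating generated images statistically closer to real ones:

\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2 + \operatorname{Tr}\left( \Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2} \right)

where (\mu_r, \Sigma_r) and (\mu_g, \Sigma_g) are the feature mean and covariance for real and generated images, respectively.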