Please use this identifier to cite or link to this item:
https://ah.lib.nccu.edu.tw/handle/140.119/138325
Title: Two Exposure Fusion Using Prior-Aware Generative Adversarial Network
Authors: Peng, Yan-Tsung (彭彥璁); Yin, Jia-Li; Chen, Bo-Hao
Contributors: Department of Computer Science
Keywords: High dynamic range image; exposure fusion; deep learning
Date: Jun-2021
Upload Time: 23-Dec-2021
Abstract: Producing a high dynamic range (HDR) image from two low dynamic range (LDR) images with extreme exposures is challenging due to the lack of well-exposed content. Existing works either use pixel fusion based on weighted quantization or conduct feature fusion using deep learning techniques. In contrast to these methods, our core idea is to progressively incorporate the pixel-domain knowledge of LDR images into the feature fusion process. Specifically, we propose a novel Prior-Aware Generative Adversarial Network (PA-GAN), along with a new dual-level loss for two-exposure fusion. The proposed PA-GAN is composed of a content-prior-guided encoder and a detail-prior-guided decoder, respectively in charge of content fusion and detail calibration. We further train the network using a dual-level loss that combines a semantic-level loss and a pixel-level loss. Extensive qualitative and quantitative evaluations on diverse image datasets demonstrate that our proposed PA-GAN outperforms state-of-the-art methods.
Relation: IEEE Transactions on Multimedia (ISSN 1941-0077)
Item Type: article
DOI: https://doi.org/10.1109/TMM.2021.3089324
Appears in Collections: Journal Articles (期刊論文)
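The abstract states that PA-GAN is trained with a dual-level loss combining a semantic-level term and a pixel-level term. A minimal sketch of such a combination is shown below; the specific metrics (L1 for the pixel level, L2 over features for the semantic level), the `weight` hyperparameter, and the toy average-pooling "feature extractor" are all illustrative assumptions, not the paper's actual definitions.

```python
import numpy as np

def pixel_level_loss(fused, reference):
    # Pixel-level term: mean L1 distance in the pixel domain
    # (an assumed choice; the paper defines its own pixel-level loss).
    return np.mean(np.abs(fused - reference))

def semantic_level_loss(feat_fused, feat_reference):
    # Semantic-level term: mean L2 distance between feature maps,
    # standing in for a perceptual comparison via a pretrained encoder.
    return np.mean((feat_fused - feat_reference) ** 2)

def dual_level_loss(fused, reference, extract_features, weight=0.5):
    # Combine the two levels; `weight` is a hypothetical balancing factor.
    pix = pixel_level_loss(fused, reference)
    sem = semantic_level_loss(extract_features(fused),
                              extract_features(reference))
    return pix + weight * sem

# Toy stand-in for a semantic feature extractor: 2x2 average pooling.
def toy_features(img):
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Toy "fused" and "reference" images (constant 4x4 grayscale patches).
a = np.full((4, 4), 0.25)
b = np.full((4, 4), 0.75)
loss = dual_level_loss(a, b, toy_features)
```

In practice the semantic-level features would come from a learned network rather than pooling, and the two terms would be balanced by a tuned hyperparameter.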