Academic Output - Periodical Articles


Title Two Exposure Fusion Using Prior-Aware Generative Adversarial Network
Authors 彭彥璁 (Peng, Yan-Tsung); Yin, Jia-Li; Chen, Bo-Hao
Contributor Department of Computer Science
Keywords High dynamic range image; exposure fusion; deep learning
Date 2021-06
Uploaded 23-Dec-2021 15:41:23 (UTC+8)
Abstract Producing a high dynamic range (HDR) image from two low dynamic range (LDR) images with extreme exposures is challenging due to the lack of well-exposed content. Existing works either use pixel fusion based on weighted quantization or conduct feature fusion using deep learning techniques. In contrast to these methods, our core idea is to progressively incorporate the pixel-domain knowledge of LDR images into the feature fusion process. Specifically, we propose a novel Prior-Aware Generative Adversarial Network (PA-GAN), along with a new dual-level loss for two-exposure fusion. The proposed PA-GAN is composed of a content prior guided encoder and a detail prior guided decoder, respectively in charge of content fusion and detail calibration. We further train the network using a dual-level loss that combines a semantic-level loss and a pixel-level loss. Extensive qualitative and quantitative evaluations on diverse image datasets demonstrate that our proposed PA-GAN outperforms state-of-the-art methods.
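The dual-level loss described in the abstract combines a pixel-level term with a semantic-level term. The following is a minimal hypothetical sketch of that combination, not the authors' implementation: the L1/L2 choices, the balancing weight `lam`, and the assumption that semantic features come from some pretrained feature extractor are all illustrative assumptions.

```python
import numpy as np

def pixel_level_loss(fused, reference):
    # Pixel-domain term: mean absolute (L1) distance between the fused
    # image and a reference image (a common choice; the paper's exact
    # formulation may differ).
    return np.mean(np.abs(fused - reference))

def semantic_level_loss(fused_feats, reference_feats):
    # Semantic-level term: mean squared (L2) distance between feature
    # maps, assumed to come from a pretrained feature extractor.
    return np.mean((fused_feats - reference_feats) ** 2)

def dual_level_loss(fused, reference, fused_feats, reference_feats, lam=0.1):
    # Combine the two terms; `lam` (an assumed hyperparameter) balances
    # the pixel-level and semantic-level contributions.
    return pixel_level_loss(fused, reference) + lam * semantic_level_loss(
        fused_feats, reference_feats
    )
```

With identical fused and reference inputs at both levels, the loss is zero; otherwise both terms push the fused output toward the reference in pixel space and feature space simultaneously.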
Relation IEEE Transactions on Multimedia (ISSN 1941-0077)
Type article
DOI https://doi.org/10.1109/TMM.2021.3089324
URI http://nccur.lib.nccu.edu.tw/handle/140.119/138325
Format application/pdf, 17042236 bytes