Academic Output: Journal Article


Title Automatic Intermediate Generation With Deep Reinforcement Learning for Robust Two-Exposure Image Fusion
Authors Peng, Yan-Tsung (彭彥璁)
Yin, Jia-Li
Chen, Bo-Hao
Hwang, Hau
Contributor Department of Computer Science
Keywords High dynamic range (HDR) image; image fusion; reinforcement learning
Date 2021-06
Uploaded 23-Dec-2021 15:40:49 (UTC+8)
Abstract Fusing low dynamic range (LDR) images for high dynamic range (HDR) imaging has gained much attention, especially for real-world applications where limited hardware resources prevent capturing images with many different exposure times. However, existing HDR image generation that picks the best parts from each LDR image often yields unsatisfactory results, owing to either too few input images or a lack of well-exposed content. To overcome this limitation, we model HDR image generation in two-exposure fusion as a deep reinforcement learning problem and learn an online compensating representation that is fused with the LDR inputs to generate the HDR image. Moreover, to train and evaluate the proposed model, we build a two-exposure dataset with reference HDR images from a public multiexposure dataset that has not yet been normalized. On the built dataset, we show that our reinforcement HDR image generation significantly outperforms competing methods under various challenging scenarios, even with limited well-exposed content. Further experimental results on a no-reference multiexposure image dataset demonstrate the generality and effectiveness of the proposed model. To the best of our knowledge, this is the first work to use a reinforcement-learning-based framework to learn an online compensating representation in two-exposure image fusion.
Relation IEEE Transactions on Neural Networks and Learning Systems, ISSN 2162-2388
Type article
DOI https://doi.org/10.1109/TNNLS.2021.3088907
URI http://nccur.lib.nccu.edu.tw/handle/140.119/138323
File format application/pdf (10102150 bytes)
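The record above describes a reinforcement-learning-based fusion method. For context, a minimal non-learning baseline for two-exposure fusion can be sketched as below: Mertens-style per-pixel well-exposedness weighting of an under- and an over-exposed image. This is an illustrative baseline only, not the paper's method, and the function names (`wellexposedness`, `fuse_two_exposures`) and the `sigma` parameter are this sketch's own choices.

```python
import numpy as np

def wellexposedness(img, sigma=0.2):
    # Weight pixels near mid-gray (0.5) highest, following the
    # well-exposedness measure used in Mertens-style exposure fusion.
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def fuse_two_exposures(under, over, eps=1e-8):
    """Naive per-pixel weighted fusion of an under- and an over-exposed image.

    Inputs are float arrays in [0, 1], shape (H, W) or (H, W, 3).
    """
    w_u = wellexposedness(under)
    w_o = wellexposedness(over)
    total = w_u + w_o + eps  # eps avoids division by zero
    return (w_u * under + w_o * over) / total

# Toy example: a uniformly dark and a uniformly bright frame.
under = np.full((4, 4), 0.1)  # under-exposed
over = np.full((4, 4), 0.9)   # over-exposed
fused = fuse_two_exposures(under, over)  # symmetric weights -> mid-gray 0.5
```

Because 0.1 and 0.5, and 0.9 and 0.5, differ by the same amount, both frames get equal weight here and the fused result is mid-gray; the paper's contribution is precisely to replace such fixed weighting with a learned, online compensating representation.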