
Title: Domain-adaptive Video Deblurring via Test-time Blurring
Authors: Peng, Yan-Tsung (彭彥璁); He, Jin-Ting; Tsai, Fu-Jen; Wu, Jia-Hao; Tsai, Chung-Chi; Lin, Chia-Wen; Lin, Yen-Yu
Contributor: Department of Computer Science
Keywords: Video deblurring; Domain adaptation; Diffusion model
Date: 2024-09
Uploaded: 7-Jan-2025 09:36:33 (UTC+8)
Abstract: Dynamic scene video deblurring aims to remove undesirable blurry artifacts captured during the exposure process. Although previous video deblurring methods have achieved impressive results, they suffer from significant performance drops due to the domain gap between training and testing videos, especially for those captured in real-world scenarios. To address this issue, we propose a domain adaptation scheme based on a blurring model to achieve test-time fine-tuning for deblurring models in unseen domains. Since blurred and sharp pairs are unavailable for fine-tuning during inference, our scheme can generate domain-adaptive training pairs to calibrate a deblurring model for the target domain. First, a Relative Sharpness Detection Module is proposed to identify relatively sharp regions from the blurry input images and regard them as pseudo-sharp images. Next, we utilize a blurring model to produce blurred images based on the pseudo-sharp images extracted during testing. To synthesize blurred images in compliance with the target data distribution, we propose a Domain-adaptive Blur Condition Generation Module to create domain-specific blur conditions for the blurring model. Finally, the generated pseudo-sharp and blurred pairs are used to fine-tune a deblurring model for better performance. Extensive experimental results demonstrate that our approach can significantly improve state-of-the-art video deblurring methods, providing performance gains of up to 7.54 dB on various real-world video deblurring datasets. The source code is available at https://github.com/Jin-Ting-He/DADeblur.
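The abstract describes a three-step test-time pipeline: find relatively sharp regions in the blurry input, re-blur them with a domain-conditioned blurring model, and fine-tune on the resulting pairs. The sketch below is only an illustrative approximation of that data flow, not the paper's implementation: `sharpness_score`, `box_blur`, and all function names are hypothetical, a gradient-variance heuristic stands in for the learned Relative Sharpness Detection Module, and a box blur stands in for the diffusion-based blurring model with domain-specific blur conditions.

```python
import numpy as np

def sharpness_score(patch):
    # Gradient-magnitude variance as a crude sharpness proxy
    # (stand-in for the paper's Relative Sharpness Detection Module).
    gy, gx = np.gradient(patch.astype(float))
    return float(np.var(np.hypot(gx, gy)))

def select_pseudo_sharp(patches, top_k=2):
    # Rank patches by relative sharpness; the sharpest regions of the
    # blurry input act as pseudo-sharp training targets.
    ranked = sorted(patches, key=sharpness_score, reverse=True)
    return ranked[:top_k]

def box_blur(patch, k=3):
    # Simple box blur standing in for the domain-conditioned blurring
    # (diffusion) model that re-blurs the pseudo-sharp patches.
    pad = k // 2
    p = np.pad(patch.astype(float), pad, mode="edge")
    out = np.zeros_like(patch, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + patch.shape[0], dx:dx + patch.shape[1]]
    return out / (k * k)

def build_adaptation_pairs(patches, top_k=2):
    # Pseudo-sharp / re-blurred pairs used to fine-tune the deblurring
    # model on the target domain at test time.
    sharp = select_pseudo_sharp(patches, top_k)
    return [(s, box_blur(s)) for s in sharp]

rng = np.random.default_rng(0)
noisy = rng.normal(size=(16, 16))  # high-frequency content, "sharp"
flat = np.ones((16, 16))           # uniform region, "blurry"
pairs = build_adaptation_pairs([flat, noisy, flat * 0.5], top_k=1)
```

In this toy run the high-frequency patch is selected as pseudo-sharp and paired with its re-blurred copy; in the actual method the pairs would then drive a fine-tuning step on the deblurring network.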
Relation: European Conference on Computer Vision (ECCV), Springer Science+Business Media
Type: conference
DOI: https://doi.org/10.1007/978-3-031-73404-5_8
URI: https://nccur.lib.nccu.edu.tw/handle/140.119/155071