Title: 基於深度直方圖網路之水下影像還原模型 (Underwater Image Restoration using Histogram-based Deep Networks)
Author: Chen, Yen-Rong (陳彥蓉)
Advisor: Peng, Yan-Tsung (彭彥璁)
Keywords: Image processing (影像處理); Image restoration (影像還原); Histogram (直方圖); Deep learning (深度學習)
Date: 2022
Uploaded: 2-Dec-2022 15:20:32 (UTC+8)

Abstract
The underwater environment is complex and visibility is low. Photos of underwater objects or creatures commonly suffer from haze-like blur and distorted water color, making it difficult to see the underwater scene. Because of the absorption, scattering, and attenuation of light as it propagates through water, underwater images are prone to severe color casts, blurriness, and low contrast. We therefore propose a model for underwater image restoration based on a deep histogram network, which learns the histogram distributions of high-quality underwater images and produces the desired histogram for an input image to enhance contrast and resolve color casts. Furthermore, we combine it with a local optimization model to further improve the visual quality of the image. In addition, the proposed network design has the advantage of fast execution speed. Experimental results demonstrate that the proposed method performs favorably against state-of-the-art approaches for underwater image restoration and enhancement.
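For intuition, the block below is a minimal, conventional histogram-specification sketch in NumPy. It is not the thesis's learned network; it only illustrates the underlying idea the abstract describes: remapping each color channel of a degraded underwater image so that its histogram follows a desired target distribution, which stretches contrast and reduces the color cast. The function names and the source of the target histograms (for example, a well-exposed reference image) are assumptions for illustration.

```python
# Minimal per-channel histogram specification (NumPy only) -- a classical stand-in
# for the "desired histogram" idea; NOT the thesis's learned deep histogram network.
import numpy as np

def match_channel(channel: np.ndarray, target_hist: np.ndarray) -> np.ndarray:
    """Remap one uint8 channel so its histogram approximates target_hist (256 bins)."""
    src_hist = np.bincount(channel.ravel(), minlength=256).astype(np.float64)
    src_cdf = np.cumsum(src_hist) / src_hist.sum()        # source CDF in (0, 1]
    tgt_cdf = np.cumsum(target_hist.astype(np.float64)) / target_hist.sum()
    # For each source intensity, pick the target intensity whose CDF value is closest.
    mapping = np.interp(src_cdf, tgt_cdf, np.arange(256))
    return mapping[channel].round().astype(np.uint8)

def restore_by_histogram(img: np.ndarray, target_hists: np.ndarray) -> np.ndarray:
    """Apply histogram specification to each channel of an HxWx3 uint8 image.

    target_hists: array of shape (3, 256), one desired histogram per channel
    (hypothetically taken from a well-exposed reference image).
    """
    return np.stack(
        [match_channel(img[..., c], target_hists[c]) for c in range(3)], axis=-1
    )

# Hypothetical usage with a reference image `ref` (HxWx3 uint8):
# target_hists = np.stack(
#     [np.bincount(ref[..., c].ravel(), minlength=256) for c in range(3)]
# )
# enhanced = restore_by_histogram(underwater_img, target_hists)
```

In the thesis's setting, the target histograms would come from the learned model rather than a fixed reference, but the remapping step conveys why controlling the histogram addresses both low contrast and color casts.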
Degree: Master's (碩士)
Institution: National Chengchi University (國立政治大學)
Department: Department of Computer Science (資訊科學系)
Student ID: 109753204
Source: http://thesis.lib.nccu.edu.tw/record/#G0109753204
URI: http://nccur.lib.nccu.edu.tw/handle/140.119/142641
Identifier: G0109753204
Type: thesis
Format: application/pdf, 26,310,118 bytes

Table of Contents
Acknowledgements
Abstract (Chinese)
Abstract
Contents
List of Figures
List of Tables
1 Introduction
2 Related Works
2.1 Traditional Approaches
2.1.1 Non-physics-model-based approaches
2.1.2 Physics-model-based approaches
2.2 Deep Learning-based Methods
3 Methodology
3.1 Model Network Architecture
3.1.1 Histoformer
3.1.2 Multi-head Histogram Self-Attention (MHSA)
3.1.3 2D Conv-Feed-Forward Network (2D-CFF)
3.1.4 Pixel-based Quality Refiner (PQR)
3.2 Loss Function
3.2.1 L2 Loss
3.2.2 L1 Loss
3.2.3 Perceptual Loss
3.2.4 GAN Loss
3.2.5 Content Loss
3.2.6 Total Loss Function
4 Dataset
4.1 Underwater Image Enhancement Benchmark (UIEB)
4.2 Real-world Underwater Image Enhancement Dataset (RUIE)
4.2.1 Underwater Image Quality Set (UIQS)
4.2.2 Underwater Higher-level Task-driven Set (UHTS)
4.2.3 Underwater Color Cast Set (UCCS)
4.3 Underwater Test Dataset (U45)
4.4 Stereo Quantitative Underwater Image Dataset (SQUID)
5 Experimental Results
5.1 Experimental Settings
5.2 Evaluation Metrics
5.2.1 Underwater Image Quality Measure (UIQM)
5.2.2 Underwater Color Image Quality Evaluation (UCIQE)
5.2.3 Patch-based Contrast Quality Index (PCQI)
5.2.4 Natural Image Quality Evaluator (NIQE)
5.3 Quantitative Comparison
5.4 Qualitative Results
5.5 Model Size, Runtime and FLOPs
5.6 Ablation Study
5.6.1 Baseline Model
5.6.2 Attention Mechanism
5.6.3 Loss
6 Conclusion
References
DOI: 10.6814/NCCU202201675