Citation Information

Title 基於深度直方圖網路之水下影像還原模型
Underwater Image Restoration using Histogram-based Deep Networks
Author Chen, Yen-Rong (陳彥蓉)
Advisor Peng, Yan-Tsung (彭彥璁)
Keywords Image processing
Image restoration
Histogram
Deep learning
Date 2022
Date uploaded 2-Dec-2022 15:20:32 (UTC+8)
Abstract The underwater environment is complex and visibility is low: photographs of underwater objects and creatures typically suffer from haze-like blur and distorted water color that obscure the scene. Because light is absorbed, scattered, and attenuated as it propagates through water, underwater images exhibit severe color casts, blurriness, and low contrast. We therefore propose an underwater image restoration model based on a deep histogram network: it applies deep learning to learn the histogram distributions of well-restored underwater images and generates the desired histograms, enhancing image contrast and correcting color casts. We further combine it with a local-region refinement model to improve visual quality, and the proposed network architecture has the advantage of fast execution. Experiments demonstrate that our method restores underwater images effectively and performs favorably against state-of-the-art approaches for underwater image restoration and enhancement.
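The step the abstract describes, generating a desired histogram and remapping the image toward it to raise contrast and correct color casts, corresponds to classic histogram specification. Below is a minimal sketch of that remapping step only, not the thesis's network; the function name and the predicted_hists input are hypothetical placeholders for whatever a learned model would output.

import numpy as np

def apply_target_histogram(channel, target_hist):
    # channel: (H, W) uint8 image channel; target_hist: length-256 desired histogram.
    src_hist = np.bincount(channel.ravel(), minlength=256).astype(np.float64)
    src_cdf = np.cumsum(src_hist) / src_hist.sum()
    tgt_cdf = np.cumsum(target_hist.astype(np.float64))
    tgt_cdf /= tgt_cdf[-1]
    # Map each source intensity to the first target intensity whose CDF reaches the source CDF value.
    lut = np.searchsorted(tgt_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut[channel]

# Hypothetical usage: remap each color channel of an underwater image with its own predicted histogram.
# restored = np.stack([apply_target_histogram(img[..., c], predicted_hists[c]) for c in range(3)], axis=-1)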
Description Master's thesis
National Chengchi University
Department of Computer Science
109753204
Source http://thesis.lib.nccu.edu.tw/record/#G0109753204
Type thesis
URI http://nccur.lib.nccu.edu.tw/handle/140.119/142641
Table of Contents Acknowledgements i
Abstract (Chinese) iii
Abstract iv
Contents v
List of Figures viii
List of Tables xiii
1 Introduction 1
2 Related Works 5
2.1 Traditional Approaches 5
2.1.1 Non-physics-model-based approaches 5
2.1.2 Physics-model-based approaches 6
2.2 Deep Learning-based Methods 7
3 Methodology 12
3.1 Model Network Architecture 12
3.1.1 Histoformer 12
3.1.2 Multi-head Histogram Self-Attention (MHSA) 14
3.1.3 2D Conv-Feed-Forward Network (2D-CFF) 16
3.1.4 Pixel-based Quality Refiner (PQR) 16
3.2 Loss Function 17
3.2.1 L2 Loss 17
3.2.2 L1 Loss 18
3.2.3 Perceptual Loss 18
3.2.4 GAN Loss 19
3.2.5 Content Loss 19
3.2.6 Total Loss Function 20
4 Dataset 21
4.1 Underwater Image Enhancement Benchmark (UIEB) 21
4.2 Real-world Underwater Image Enhancement Dataset (RUIE) 23
4.2.1 Underwater Image Quality Set (UIQS) 23
4.2.2 Underwater Higher-level Task-driven Set (UHTS) 24
4.2.3 Underwater Color Cast Set (UCCS) 24
4.3 Underwater Test Dataset (U45) 25
4.4 Stereo Quantitative Underwater Image Dataset (SQUID) 25
5 Experimental Results 27
5.1 Experimental Settings 27
5.2 Evaluation Metrics 28
5.2.1 Underwater Image Quality Measure (UIQM) 28
5.2.2 Underwater Color Image Quality Evaluation (UCIQE) 28
5.2.3 Patch-based Contrast Quality Index (PCQI) 29
5.2.4 Natural Image Quality Evaluator (NIQE) 29
5.3 Quantitative Comparison 29
5.4 Qualitative Results 35
5.5 Model Size, Runtime and FLOPs 51
5.6 Ablation Study 54
5.6.1 Baseline Model 54
5.6.2 Attention mechanism 55
5.6.3 Loss 56
6 Conclusion 57
References 58
Format application/pdf (26310118 bytes)
DOI 10.6814/NCCU202201675