Title 基於多尺度融合機制去除摩爾紋的網路模型
Image Demoireing using Multi-scale Fusion Networks
Author Hou, Chih-Hsiang (侯治祥)
Advisor Peng, Yan-Tsung (彭彥璁)
Keywords Image processing (影像處理)
Image restoration (影像還原)
Image moiré removal (影像去除摩爾紋)
Date 2022
Uploaded 2-Sep-2022 15:05:57 (UTC+8)
Abstract Photographing a digital display often produces a visually annoying optical effect called moiré, which degrades image quality. Because the pixel grid of the photographed screen interferes with the pixel grid of the camera sensor, superimposing the two grids yields irregularly shaped, rainbow-colored stripes: moiré patterns. Unlike other image restoration tasks, demoiréing is difficult because moiré patterns span a wide frequency range, appearing in both high- and low-frequency bands, and exhibit irregular shapes and distorted colors. This thesis proposes an Image Demoiréing Multi-scale Fusion Network (DMSFN) to remove moiré patterns, together with a data augmentation method based on transferring moiré patterns, which further improves demoiréing performance. Experimental results show that our model performs favorably against state-of-the-art demoiréing methods on benchmark datasets.
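The abstract attributes moiré to the interference of two superimposed pixel grids. As a minimal illustration of that mechanism only (not the thesis's method, and with arbitrarily chosen frequencies and angle), the following NumPy sketch synthesizes a moiré-like beat pattern by multiplying two slightly mismatched sinusoidal gratings:

```python
import numpy as np

def grating(size, freq, angle):
    """Sinusoidal grating: a crude stand-in for a regular pixel grid."""
    y, x = np.mgrid[0:size, 0:size].astype(float)
    t = x * np.cos(angle) + y * np.sin(angle)  # coordinate along the grating
    return 0.5 + 0.5 * np.sin(2.0 * np.pi * freq * t / size)

def moire(size=256, f1=40, f2=43, angle=np.deg2rad(2.0)):
    """Superimpose two grids with a small frequency/orientation mismatch;
    their product contains a low-frequency beat -- the moiré pattern."""
    return grating(size, f1, 0.0) * grating(size, f2, angle)

pattern = moire()  # values in [0, 1]; low-frequency fringes appear
```

Even this toy example shows why demoiréing is hard: although both input gratings are high-frequency, their product contains energy at the low difference frequency as well, matching the abstract's observation that moiré spans both high- and low-frequency bands.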
References [1] Yujing Sun, Yizhou Yu, and Wenping Wang. Moiré photo restoration using multiresolution convolutional neural networks. IEEE Transactions on Image Processing, 27(8):4160–4172, 2018.
[2] Shanghui Yang, Yajing Lei, Shuangyu Xiong, and Wei Wang. High resolution demoire network. In 2020 IEEE International Conference on Image Processing (ICIP), pages 888–892. IEEE, 2020.
[3] Bin He, Ce Wang, Boxin Shi, and Ling-Yu Duan. Mop moire patterns using mopnet. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2424–2432, 2019.
[4] Xi Cheng, Zhenyong Fu, and Jian Yang. Multi-scale dynamic feature encoding network for image demoiréing. In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pages 3486–3493. IEEE, 2019.
[5] Xiaotong Luo, Jiangtao Zhang, Ming Hong, Yanyun Qu, Yuan Xie, and Cuihua Li. Deep wavelet network with domain adaptation for single image demoireing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 420–421, 2020.
[6] Lin Liu, Shanxin Yuan, Jianzhuang Liu, Liping Bao, Gregory Slabaugh, and Qi Tian. Self-adaptively learning to demoiré from focused and defocused image pairs. arXiv preprint arXiv:2011.02055, 2020.
[7] Lin Liu, Jianzhuang Liu, Shanxin Yuan, Gregory Slabaugh, Aleš Leonardis, Wen-gang Zhou, and Qi Tian. Wavelet-based dual-branch network for image demoiréing. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XIII 16, pages 86–102. Springer, 2020.
[8] Bin He, Ce Wang, Boxin Shi, and Ling-Yu Duan. FHDe2Net: Full high definition demoireing network. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXII 16, pages 713–729. Springer, 2020.
[9] Bolun Zheng, Shanxin Yuan, Gregory Slabaugh, and Ales Leonardis. Image demoireing with learnable bandpass filters. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3636–3645, 2020.
[10] Shanxin Yuan, Radu Timofte, Gregory Slabaugh, and Aleš Leonardis. Aim 2019 challenge on image demoireing: Dataset and study. In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pages 3526–3533. IEEE, 2019.
[11] Shanxin Yuan, Radu Timofte, Ales Leonardis, and Gregory Slabaugh. Ntire 2020 challenge on image demoireing: Methods and results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 460–461, 2020.
[12] Zhendong Wang, Xiaodong Cun, Jianmin Bao, Wengang Zhou, Jianzhuang Liu, and Houqiang Li. Uformer: A general u-shaped transformer for image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17683–17693, 2022.
[13] Jingyu Yang, Xue Zhang, Changrui Cai, and Kun Li. Demoiréing for screen-shot images with multi-channel layer decomposition. In 2017 IEEE Visual Communications and Image Processing (VCIP), pages 1–4. IEEE, 2017.
[14] Fanglei Liu, Jingyu Yang, and Huanjing Yue. Moiré pattern removal from texture images via low-rank and sparse matrix decomposition. In 2015 Visual Communications and Image Processing (VCIP), pages 1–4. IEEE, 2015.
[15] Jingyu Yang, Fanglei Liu, Huanjing Yue, Xiaomei Fu, Chunping Hou, and Feng Wu. Textured image demoiréing via signal decomposition and guided filtering. IEEE Transactions on Image Processing, 26(7):3528–3541, 2017.
[16] Zhouping Wei, Jian Wang, Helen Nichol, Sheldon Wiebe, and Dean Chapman. A median-gaussian filtering framework for moiré pattern noise removal from x-ray microscopy image. Micron, 43(2-3):170–176, 2012.
[17] Fu-Jen Tsai, Yan-Tsung Peng, Yen-Yu Lin, Chung-Chi Tsai, and Chia-Wen Lin. BANet: Blur-aware attention networks for dynamic scene deblurring. arXiv preprint arXiv:2101.07518, 2021.
[18] Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang, and Ling Shao. Multi-stage progressive image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14821–14831, 2021.
[19] Ke Sun, Bin Xiao, Dong Liu, and Jingdong Wang. Deep high-resolution representation learning for human pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5693–5703, 2019.
[20] Dejia Xu, Yihao Chu, and Qingyan Sun. Moiré pattern removal via attentive fractal network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 472–473, 2020.
[21] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pages 234–241. Springer, 2015.
[22] Bolun Zheng, Shanxin Yuan, Chenggang Yan, Xiang Tian, Jiyong Zhang, Yaoqi Sun, Lin Liu, Ales Leonardis, and Greg Slabaugh. Learning frequency domain priors for image demoireing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
[23] Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412, 2017.
[24] Shashanka Venkataramanan, Ewa Kijak, Laurent Amsaleg, and Yannis Avrithis. Alignmixup: Improving representations by interpolating aligned features. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022.
[25] Jaejun Yoo, Namhyuk Ahn, and Kyung-Ah Sohn. Rethinking data augmentation for image super-resolution: A comprehensive analysis and a new strategy. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8375–8384, 2020.
[26] Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122, 2015.
[27] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4700–4708, 2017.
[28] Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7132–7141, 2018.
[29] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
[30] Sanghyun Woo, Jongchan Park, Joon-Young Lee, and In So Kweon. Cbam: Convolutional block attention module. In Proceedings of the European conference on computer vision (ECCV), pages 3–19, 2018.
[31] Qibin Hou, Li Zhang, Ming-Ming Cheng, and Jiashi Feng. Strip pooling: Rethinking spatial pooling for scene parsing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4003–4012, 2020.
[32] Phong Tran, Anh Tuan Tran, Quynh Phung, and Minh Hoai. Explore image deblurring via encoded blur kernel space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11956–11965, 2021.
[33] Hang Zhao, Orazio Gallo, Iuri Frosio, and Jan Kautz. Loss functions for image restoration with neural networks. IEEE Transactions on computational imaging, 3(1):47–57, 2016.
[34] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European conference on computer vision, pages 694–711. Springer, 2016.
[35] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[36] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. IEEE, 2009.
[37] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32, 2019.
[38] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[39] Wenhan Yang, Robby T Tan, Jiashi Feng, Jiaying Liu, Zongming Guo, and Shuicheng Yan. Deep joint rain detection and removal from a single image. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1357–1366, 2017.
Description Master's thesis
National Chengchi University
Department of Computer Science
109753149
Source http://thesis.lib.nccu.edu.tw/record/#G0109753149
Type thesis
dc.contributor.advisor 彭彥璁zh_TW
dc.contributor.advisor Peng, Yan-Tsungen_US
dc.contributor.author (Authors) 侯治祥zh_TW
dc.contributor.author (Authors) Hou, Chih-Hsiangen_US
dc.creator (作者) 侯治祥zh_TW
dc.creator (作者) Hou, Chih-Hsiangen_US
dc.date (日期) 2022en_US
dc.date.accessioned 2-Sep-2022 15:05:57 (UTC+8)-
dc.date.available 2-Sep-2022 15:05:57 (UTC+8)-
dc.date.issued (上傳時間) 2-Sep-2022 15:05:57 (UTC+8)-
dc.identifier (Other Identifiers) G0109753149en_US
dc.identifier.uri (URI) http://nccur.lib.nccu.edu.tw/handle/140.119/141643-
dc.description (描述) 碩士zh_TW
dc.description (描述) 國立政治大學zh_TW
dc.description (描述) 資訊科學系zh_TW
dc.description (描述) 109753149zh_TW
dc.description.tableofcontents 摘要 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ii
Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v
List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
1 INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
2 RELATED WORK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.1 Traditional Image Demoiréing Methods . . . . . . . . . . . . . . . . . . 6
2.2 Deep learning Based Image Demoiréing Methods . . . . . . . . . . . . . 6
2.2.1 Demoiréing in Spatial Domain . . . . . . . . . . . . . . . . . . . 7
2.2.2 Demoiréing in Frequency domain . . . . . . . . . . . . . . . . . 11
2.3 Data Augmentation for Moiré pattern . . . . . . . . . . . . . . . . . . . 14
3 PROPOSED METHOD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.1 Network Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.1.1 Dilated-Dense Attention . . . . . . . . . . . . . . . . . . . . . . 17
3.1.2 Demoiréing Multi-Scale Feature Interaction . . . . . . . . . . . . 18
3.1.3 Multi-Kernel Strip Pooling . . . . . . . . . . . . . . . . . . . . . 21
3.1.4 Supervised Attention Module . . . . . . . . . . . . . . . . . . . 23
3.2 Data Augmentation for Moiré patterns . . . . . . . . . . . . . . . . . . . 24
3.3 Loss Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4 DATASET . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
4.1 Real-World data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
4.2 Synthetic data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
5 EXPERIMENTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
5.1 Implementation Detail . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
5.2 Full-Reference Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
5.3 Quantitative Evaluations . . . . . . . . . . . . . . . . . . . . . . . . . . 34
5.4 Qualitative Evaluations . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
5.5 Comparisons on Model Size, Runtime, and Required Operations . . . . . 46
5.6 Ablation study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.7 Data Augmentation for Moiré Images . . . . . . . . . . . . . . . . . . . 48
6 CONCLUSION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
zh_TW
dc.format.extent 73804469 bytes-
dc.format.mimetype application/pdf-
dc.source.uri (資料來源) http://thesis.lib.nccu.edu.tw/record/#G0109753149en_US
dc.subject (關鍵詞) 影像處理zh_TW
dc.subject (關鍵詞) 影像還原zh_TW
dc.subject (關鍵詞) 影像去除摩爾紋zh_TW
dc.subject (關鍵詞) Image processingen_US
dc.subject (關鍵詞) Image restorationen_US
dc.subject (關鍵詞) Image moiré removalen_US
dc.title (題名) 基於多尺度融合機制去除摩爾紋的網路模型zh_TW
dc.title (題名) Image Demoireing using Multi-scale Fusion Networksen_US
dc.type (資料類型) thesisen_US
dc.identifier.doi (DOI) 10.6814/NCCU202201419en_US