Title 基於日間合成資料之夜晚複合天候影像還原機制
Restoration Mechanism for Nighttime Composite Weather Images Using Daytime Synthetic Data
Author 張佑笙 (Chang, Yu-Sheng)
Advisor 廖文宏 (Liao, Wen-Hung)
Keywords Nighttime image restoration; Composite weather degraded images; Multi-task image restoration; Deep learning
Date 2023
Uploaded 1-Sep-2023 15:39:42 (UTC+8)
Abstract In recent years, substantial progress has been made in deep learning for image restoration, contributing to a wide range of applications. In self-driving systems, however, images captured under adverse weather conditions can considerably reduce the accuracy of object detection algorithms, impairing the system's ability to assess dangerous driving events. Beyond individual weather conditions, real-world scenes also suffer degradation from compound weather, so restoring images affected by multiple weather conditions with a single model has emerged as an important topic in image restoration.

This thesis uses physics-based models to synthesize images under diverse weather conditions, adding rain streaks, fog, raindrops, and their combinations to clear daytime images; a total of seven types of composite weather images serve as restoration targets. To strengthen nighttime restoration as well, a generative adversarial network (GAN) is employed to generate clear nighttime images. On the architecture side, a task-adaptive mechanism is combined with a brightness enhancement module, and the generated multi-weather images are used as training data, yielding a single composite weather restoration model that handles both daytime and nighttime scenes.
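
As an illustrative sketch of this kind of physics-based synthesis (an assumption for exposition, not the thesis's exact implementation), fog can be composited onto a clear image with the standard atmospheric scattering model used in synthetic fog work such as [18]:

```python
import numpy as np

def add_synthetic_fog(clear_rgb: np.ndarray, depth_m: np.ndarray,
                      beta: float = 0.05, airlight: float = 0.9) -> np.ndarray:
    """Composite fog onto a clear image via atmospheric scattering.

    Model: I(x) = J(x) * t(x) + A * (1 - t(x)), with t(x) = exp(-beta * d(x)).

    clear_rgb : HxWx3 float image in [0, 1] (the clear daytime frame J)
    depth_m   : HxW scene depth in meters (assumed available, e.g. from
                disparity maps accompanying Cityscapes-style data)
    beta      : scattering coefficient; larger values give denser fog
    airlight  : global atmospheric light A
    """
    t = np.exp(-beta * depth_m)[..., None]        # HxWx1 transmission map
    foggy = clear_rgb * t + airlight * (1.0 - t)  # scattering composition
    return np.clip(foggy, 0.0, 1.0)
```

Rain streaks and raindrops would be layered with their own physical models (e.g. [15], [16]); stacking the three effects produces the seven composite classes.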

To further verify the quality of nighttime restoration, this thesis also synthesizes nighttime composite weather images and computes image quality metrics on the restored results before and after the brightness enhancement module is applied, giving an objective analysis of the restoration. In addition, the YOLOv7 object detection model is used to detect common road objects, confirming that restoring degraded images with the proposed composite weather model effectively improves detection accuracy.
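
A minimal sketch of this before/after evaluation, assuming scikit-image for the PSNR/SSIM metrics (the restoration model itself is treated as a black box here):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_restoration(restored: np.ndarray, ground_truth: np.ndarray):
    """Score a restored image against its clear ground truth.

    Both inputs are HxWx3 float arrays in [0, 1]. Running this once with
    the brightness enhancement module disabled and once with it enabled
    quantifies the module's contribution.
    """
    psnr = peak_signal_noise_ratio(ground_truth, restored, data_range=1.0)
    ssim = structural_similarity(ground_truth, restored, data_range=1.0,
                                 channel_axis=-1)  # average SSIM over RGB
    return psnr, ssim
```
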
References
[1] Wikipedia: Artificial neural network. https://zh-yue.wikipedia.org/wiki/File:ArtificialNeuronModel_english.png
[2] Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT press.
[3] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., ... & Bengio, Y. (2020). Generative adversarial networks. Communications of the ACM, 63(11), 139-144.
[4] Radford, A., Metz, L., & Chintala, S. (2015). Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434.
[5] Karras, T., Laine, S., & Aila, T. (2019). A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 4401-4410).
[6] Zhu, J. Y., Park, T., Isola, P., & Efros, A. A. (2017). Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision (pp. 2223-2232).
[7] Hu, J., Shen, L., & Sun, G. (2018). Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 7132-7141).
[8] Qin, X., Wang, Z., Bai, Y., Xie, X., & Jia, H. (2020, April). FFA-Net: Feature fusion attention network for single image dehazing. In Proceedings of the AAAI conference on artificial intelligence (Vol. 34, No. 07, pp. 11908-11915).
[9] 鄭可昕 (2021). Simulation of nighttime haze images and evaluation of restoration methods based on a deep learning framework (基於深度學習框架之夜晚霧霾圖像模擬與復原方法評估). Master's thesis, Department of Computer Science, National Chengchi University, Taipei.
[10] Park, Y., Jeon, M., Lee, J., & Kang, M. (2022). MCW-Net: Single image deraining with multi-level connections and wide regional non-local blocks. Signal Processing: Image Communication, 105, 116701.
[11] Wang, L. W., Liu, Z. S., Siu, W. C., & Lun, D. P. (2020). Lightening network for low-light image enhancement. IEEE Transactions on Image Processing, 29, 7984-7996.
[12] Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benenson, R., ... & Schiele, B. (2016). The Cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3213-3223).
[13] Porav, H., Musat, V. N., Bruls, T., & Newman, P. (2020). Rainy screens: Collecting rainy datasets, indoors. arXiv preprint arXiv:2003.04742.
[14] Wang, T. C., Liu, M. Y., Zhu, J. Y., Tao, A., Kautz, J., & Catanzaro, B. (2018). High-resolution image synthesis and semantic manipulation with conditional GANs. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 8798-8807).
[15] Halder, S. S., Lalonde, J. F., & Charette, R. D. (2019). Physics-based rendering for improving robustness to rain. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 10203-10212).
[16] Garg, K., & Nayar, S. K. (2006). Photorealistic rendering of rain streaks. ACM Transactions on Graphics (TOG), 25(3), 996-1002.
[17] De Charette, R., Tamburo, R., Barnum, P. C., Rowe, A., Kanade, T., & Narasimhan, S. G. (2012, April). Fast reactive control for illumination through rain and snow. In 2012 IEEE International Conference on Computational Photography (ICCP) (pp. 1-10). IEEE.
[18] Sakaridis, C., Dai, D., & Van Gool, L. (2018). Semantic foggy scene understanding with synthetic data. International Journal of Computer Vision, 126, 973-992.
[19] Kahraman, S., & De Charette, R. (2017). Influence of fog on computer vision algorithms (Doctoral dissertation, Inria Paris).
[20] Sakaridis, C., Dai, D., & Van Gool, L. (2020). Map-guided curriculum domain adaptation and uncertainty-aware evaluation for semantic nighttime image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(6), 3139-3153.
[21] Mușat, V., Fursa, I., Newman, P., Cuzzolin, F., & Bradley, A. (2021). Multi-weather city: Adverse weather stacking for autonomous driving. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 2906-2915).
[22] Zhou, J., Leong, C., Lin, M., Liao, W., & Li, C. (2022). Task adaptive network for image restoration with combined degradation factors. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 1-8).
[23] Chen, D., He, M., Fan, Q., Liao, J., Zhang, L., Hou, D., ... & Hua, G. (2019, January). Gated context aggregation network for image dehazing and deraining. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV) (pp. 1375-1383). IEEE.
[24] Wang, C. Y., Bochkovskiy, A., & Liao, H. Y. M. (2022). YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv preprint arXiv:2207.02696.
[25] Baskar, H., Chakravarthy, A. S., Garg, P., Goel, D., Raj, A. S., Kumar, K., ... & Rout, B. K. (2022). Nighttime Dehaze-Enhancement. arXiv preprint arXiv:2210.09962.
[26] Lv, F., Li, Y., & Lu, F. (2021). Attention guided low-light image enhancement with a large scale low-light simulation dataset. International Journal of Computer Vision, 129(7), 2175-2193.
[27] Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4), 600-612.
[28] Cityscapes-to-COCO label conversion: https://github.com/TillBeemelmanns/cityscapes-to-coco-conversion
[29] Yang, W., Tan, R. T., Feng, J., Liu, J., Guo, Z., & Yan, S. (2017). Deep joint rain detection and removal from a single image. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1357-1366).
[30] Qian, R., Tan, R. T., Yang, W., Su, J., & Liu, J. (2018). Attentive generative adversarial network for raindrop removal from a single image. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2482-2491).
[31] Zhang, J., Cao, Y., Zha, Z. J., & Tao, D. (2020, October). Nighttime dehazing with a synthetic benchmark. In Proceedings of the 28th ACM international conference on multimedia (pp. 2355-2363).
[32] Lim, W. T., Ang, K., & Loh, Y. P. (2022, December). Deep Enhancement-Object Features Fusion for Low-Light Object Detection. In Proceedings of the 4th ACM International Conference on Multimedia in Asia (pp. 1-6).
[33] Howard, A., Sandler, M., Chu, G., Chen, L. C., Chen, B., Tan, M., ... & Adam, H. (2019). Searching for MobileNetV3. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 1314-1324).
[34] Valanarasu, J. M. J., Yasarla, R., & Patel, V. M. (2022). TransWeather: Transformer-based restoration of images degraded by adverse weather conditions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 2353-2363).
[35] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., & Zagoruyko, S. (2020). End-to-end object detection with transformers. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part I 16 (pp. 213-229). Springer International Publishing.
[36] Koblik, K. (2021). Simulation of rain on a windshield: Creating a real-time effect using GPGPU computing.
Description Master's
National Chengchi University
In-service Master's Program, Department of Computer Science
110971001
Source http://thesis.lib.nccu.edu.tw/record/#G0110971001
Type thesis
Identifier G0110971001
URI http://nccur.lib.nccu.edu.tw/handle/140.119/147096
Table of Contents
Abstract (Chinese)
Abstract (English)
Table of Contents
List of Figures
List of Tables
Chapter 1 Introduction
1.1 Research Background and Motivation
1.2 Research Objectives
1.3 Thesis Organization
Chapter 2 Related Work and Technical Background
2.1 Deep Learning-Based Image Restoration
2.1.1 Convolutional Neural Networks
2.1.2 Generative Adversarial Networks
2.1.3 Attention Networks
2.2 Restoration of Weather-Degraded Images
2.3 Low-Light Enhancement Models
2.4 Task-Adaptive Restoration Model
2.5 TransWeather Multi-Weather Restoration Model
2.6 Image Quality Assessment Metrics
2.6.1 Peak Signal-to-Noise Ratio (PSNR)
2.6.2 Structural Similarity (SSIM)
2.7 Object Detection Evaluation Metrics
2.7.1 Precision
2.7.2 Recall
2.7.3 Intersection over Union (IoU)
2.7.4 Mean Average Precision (mAP)
Chapter 3 Methodology
3.1 Basic Concept
3.2 Preliminary Studies
3.2.1 Image Restoration Datasets
3.2.2 Generation of Daytime Weather-Degraded Images
3.2.3 Generation of Nighttime Weather-Degraded Images
3.2.4 Daytime Composite Weather Restoration
3.2.5 Nighttime Composite Weather Restoration
3.3 Research Framework
3.3.1 Problem Statement
3.3.2 Research Procedure
3.3.3 LETAN Brightness-Enhanced Restoration Model
Chapter 4 Experimental Results and Analysis
4.1 Experimental Environment
4.2 Research Process
4.2.1 Validation across Different Weather Degradation Intensities
4.2.2 Training LETAN with Daytime Data
4.2.3 Training with Combined Daytime and Nighttime Data
4.3 Analysis of Results
4.3.1 Image Quality Analysis
4.3.2 Object Detection on Restored Images
4.3.3 Comparison with the TransWeather Restoration Model
4.3.4 Validation on Real Weather-Degraded Images
Chapter 5 Conclusions and Future Work
5.1 Conclusions
5.2 Future Research Directions
References
Format application/pdf, 5409034 bytes