Academic Output - Journal Articles

Title Synthetic Data-Driven Real-Time Detection Transformer Object Detection in Raining Weather Conditions
Author(s) 甯方璽
Ning, Fang-Shii; Hao, Chen-Yu; Chen, Yao-Chung; Chen, Tai-Tien; Lai, Ting-Hsuan; Chou, Tien-Yin; Chen, Mei-Hsin
Contributor Department of Land Economics
Keywords neural style transfer (NST); CycleGAN; analytical; object detection; Real-Time Detection Transformer (RTDETR); YOLOv8; synthesizes rainy images
Date 2024-06
Upload time 10-Sep-2024 13:20:36 (UTC+8)
Abstract Images captured in rainy weather conditions often suffer from contamination, resulting in blurred or obscured objects, which can significantly impact detection performance due to the loss of identifiable texture and color information. Moreover, the quality of the detection model plays a pivotal role in determining detection outcomes. This study adopts a dual perspective, considering both pre-trained models and training data. It employs 15 image augmentation techniques, combined with neural style transfer (NST), CycleGAN, and an analytical method, to synthesize images under rainy conditions. The Real-Time Detection Transformer (RTDETR) and YOLOv8 pre-trained models are utilized to establish object detection frameworks tailored for rainy weather conditions. Testing is carried out using the DAWN (Detection in Adverse Weather Nature) dataset. The findings suggest compatibility between the pre-trained detection models and various data synthesis methods. Notably, YOLOv8 exhibits better compatibility with CycleGAN data synthesis, while RTDETR demonstrates a stronger alignment with the NST and analytical approaches. Upon the integration of synthesized rainy images into model training, RTDETR demonstrates significantly enhanced detection accuracy compared to YOLOv8, indicating a more pronounced improvement in performance. The proposed approach of combining RTDETR with NST in this study shows a significant improvement in Recall (R) and mAP50-95 by 16.35% and 15.50%, respectively, demonstrating the robust rainy weather resilience of this method. Additionally, RTDETR outperforms YOLOv8 in terms of inference speed and hardware requirements, making it easier to use and deploy in real-time applications. (Illustrative code sketches of the rain-synthesis and model-comparison steps appear after the records below.)
Relation Applied Sciences, Vol. 14, No. 11, 4910
Data type article
DOI https://doi.org/10.3390/app14114910
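
The abstract describes synthesizing rainy training images with neural style transfer, CycleGAN, and an analytical method. The paper's own procedure is not reproduced in this record; the snippet below is only a minimal sketch of one common analytical rain-overlay idea (sparse noise smeared into slanted streaks and composited over a clear-weather image), with every function name and parameter value chosen here for illustration rather than taken from the authors' code.

```python
# Minimal sketch (NOT the authors' implementation): analytically composite rain
# streaks onto a clear-weather image using only NumPy and OpenCV.
import cv2
import numpy as np

def add_rain(image_bgr: np.ndarray,
             streak_density: float = 0.002,
             streak_length: int = 15,
             angle_deg: float = 20.0,
             brightness_drop: float = 0.85) -> np.ndarray:
    """Return a copy of a uint8 BGR image with simple synthetic rain streaks."""
    h, w = image_bgr.shape[:2]

    # 1) Sparse random seeds mark where streaks will appear.
    seeds = (np.random.rand(h, w) < streak_density).astype(np.float32)

    # 2) Smear each seed along a slanted line kernel (a crude motion blur).
    kernel = np.zeros((streak_length, streak_length), dtype=np.float32)
    kernel[streak_length // 2, :] = 1.0
    rot = cv2.getRotationMatrix2D((streak_length / 2.0, streak_length / 2.0), angle_deg, 1.0)
    kernel = cv2.warpAffine(kernel, rot, (streak_length, streak_length))
    streaks = np.clip(cv2.filter2D(seeds, -1, kernel), 0.0, 1.0)

    # 3) Slightly darken the scene (overcast look) and add light-gray streaks.
    rained = image_bgr.astype(np.float32) * brightness_drop
    rained += streaks[..., None] * 180.0
    return np.clip(rained, 0, 255).astype(np.uint8)

# Hypothetical usage:
# img = cv2.imread("clear.jpg"); cv2.imwrite("rainy.jpg", add_rain(img))
```

In the study itself, 15 image augmentation techniques plus NST and CycleGAN complement such analytically rendered rain; this sketch covers only the analytical flavor.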
dc.contributor Department of Land Economics
dc.creator (Author) 甯方璽
dc.creator (Author) Ning, Fang-Shii; Hao, Chen-Yu; Chen, Yao-Chung; Chen, Tai-Tien; Lai, Ting-Hsuan; Chou, Tien-Yin; Chen, Mei-Hsin
dc.date (Date) 2024-06
dc.date.accessioned 10-Sep-2024 13:20:36 (UTC+8)
dc.date.available 10-Sep-2024 13:20:36 (UTC+8)
dc.date.issued (Upload time) 10-Sep-2024 13:20:36 (UTC+8)
dc.identifier.uri (URI) https://nccur.lib.nccu.edu.tw/handle/140.119/153682
dc.description.abstract (Abstract) Images captured in rainy weather conditions often suffer from contamination, resulting in blurred or obscured objects, which can significantly impact detection performance due to the loss of identifiable texture and color information. Moreover, the quality of the detection model plays a pivotal role in determining detection outcomes. This study adopts a dual perspective, considering both pre-trained models and training data. It employs 15 image augmentation techniques, combined with neural style transfer (NST), CycleGAN, and an analytical method, to synthesize images under rainy conditions. The Real-Time Detection Transformer (RTDETR) and YOLOv8 pre-trained models are utilized to establish object detection frameworks tailored for rainy weather conditions. Testing is carried out using the DAWN (Detection in Adverse Weather Nature) dataset. The findings suggest compatibility between the pre-trained detection models and various data synthesis methods. Notably, YOLOv8 exhibits better compatibility with CycleGAN data synthesis, while RTDETR demonstrates a stronger alignment with the NST and analytical approaches. Upon the integration of synthesized rainy images into model training, RTDETR demonstrates significantly enhanced detection accuracy compared to YOLOv8, indicating a more pronounced improvement in performance. The proposed approach of combining RTDETR with NST in this study shows a significant improvement in Recall (R) and mAP50-95 by 16.35% and 15.50%, respectively, demonstrating the robust rainy weather resilience of this method. Additionally, RTDETR outperforms YOLOv8 in terms of inference speed and hardware requirements, making it easier to use and deploy in real-time applications.
dc.format.extent 99 bytes
dc.format.mimetype text/html
dc.relation (Relation) Applied Sciences, Vol. 14, No. 11, 4910
dc.subject (Keywords) neural style transfer (NST); CycleGAN; analytical; object detection; Real-Time Detection Transformer (RTDETR); YOLOv8; synthesizes rainy images
dc.title (Title) Synthetic Data-Driven Real-Time Detection Transformer Object Detection in Raining Weather Conditions
dc.type (Data type) article
dc.identifier.doi (DOI) 10.3390/app14114910
dc.doi.uri (DOI) https://doi.org/10.3390/app14114910
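
The record also states that RTDETR and YOLOv8 pre-trained models were fine-tuned on the synthesized rainy images and evaluated (e.g., Recall and mAP50-95) on the DAWN dataset. As a hedged orientation only, the sketch below shows how such a comparison could be wired up with the Ultralytics Python package, which ships both model families; the dataset YAML name and all hyperparameters are placeholders, not the settings used in the paper.

```python
# Hypothetical comparison sketch, assuming the Ultralytics package's RTDETR/YOLO APIs.
# "rainy_dawn.yaml" is a placeholder dataset config (train = synthesized rainy images,
# val = a DAWN rain split); it is not provided by the paper or this record.
from ultralytics import RTDETR, YOLO

DATA = "rainy_dawn.yaml"

for name, model in (("RT-DETR", RTDETR("rtdetr-l.pt")),
                    ("YOLOv8", YOLO("yolov8l.pt"))):
    model.train(data=DATA, epochs=100, imgsz=640)  # fine-tune on rain-augmented data
    metrics = model.val(data=DATA)                 # evaluate on the validation split
    # In Ultralytics' metrics object, box.mr is mean recall and box.map is mAP50-95.
    print(f"{name}: Recall={metrics.box.mr:.4f}  mAP50-95={metrics.box.map:.4f}")
```

The gains reported in the abstract (16.35% Recall and 15.50% mAP50-95 for RTDETR with NST) come from the authors' own training setup and data synthesis pipeline and would not be reproduced by this placeholder configuration.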