Title: 基於深度和梯度角變化的交通違規偵測與車速預測
Traffic Violation Detection based on Depth and Gradient Angle Change and Speed Prediction
Author: Liu, Chen-Yu (劉宸羽)
Advisor: Peng, Yan-Tsung (彭彥璁)
Keywords: Traffic violation detection; Intelligent transportation systems; Vehicle behavior analysis; Vehicle action analysis; Traffic violation detection system
Date: 2024
Uploaded: 1-Mar-2024 13:42:07 (UTC+8)
Abstract: In recent years, the number of public reports of vehicle violations has surged, and the lack of an automated system for judging whether the vehicles in the reported videos actually commit violations places a heavy burden on the police. To address this technology-assisted enforcement problem, we design a decision system based on object detection and depth variation, assisted by a prior object-tracking algorithm, and we build our own traffic-violation dataset. The system targets two types of violations: (1) driving straight through a red light and (2) turning left or right on a red light. It consists of two main stages: Violation Target Tracking (VTT) and Target Action Analysis (TAA). The VTT stage detects traffic lights and vehicle license plates and obtains their trajectories and depths; the TAA stage then models changes in vehicle depth and azimuth to decide whether a violation has occurred. Experimental results show that the system achieves an average true accuracy of 76% and a conditional accuracy of 81% over all violation cases. In addition, we apply a road-lane detection model and mathematical analysis to estimate the speed of the recording vehicle from its dashcam footage and thereby detect whether the road speed limit is exceeded.
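The full thesis text is not included in this record, so the following is only a minimal sketch of what a TAA-style decision rule based on depth and gradient-angle (azimuth) change might look like; the data format, function names, and thresholds are hypothetical and not taken from the thesis.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TrackSample:
    """One hypothetical VTT observation of a tracked license plate."""
    frame_idx: int
    depth: float    # estimated distance from the camera (e.g., from a monocular depth model)
    azimuth: float  # gradient/heading angle of the trajectory, in degrees

def classify_violation(samples: List[TrackSample],
                       light_is_red: bool,
                       depth_change_thresh: float = 0.3,
                       azimuth_change_thresh: float = 25.0) -> str:
    """Toy rule: while the light is red, a large azimuth change suggests a turn
    on red (TRL), and a large relative depth change suggests red-light running
    (RLR). Thresholds are illustrative only."""
    if not light_is_red or len(samples) < 2:
        return "no_violation"
    azimuth_change = abs(samples[-1].azimuth - samples[0].azimuth)
    depth_change = abs(samples[-1].depth - samples[0].depth)
    relative_depth_change = depth_change / max(samples[0].depth, 1e-6)
    if azimuth_change > azimuth_change_thresh:
        return "turning_on_red_light"
    if relative_depth_change > depth_change_thresh:
        return "red_light_running"
    return "no_violation"
```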
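The record likewise does not spell out the speed-estimation math, so the sketch below only illustrates one plausible reading of "road-lane detection plus mathematical analysis": time how long the recording vehicle takes to pass one dash-plus-gap cycle of the lane marking found by a lane-detection model, then convert that to speed. The cycle length, frame rate, and function name are assumptions, not the thesis's method.

```python
def estimate_speed_kmh(frames_per_marking_cycle: float,
                       fps: float = 30.0,
                       dash_plus_gap_m: float = 10.0) -> float:
    """Estimate the recording vehicle's speed from dashcam video, assuming a
    lane-detection model lets us count how many frames one full dash-plus-gap
    cycle of the lane marking takes to sweep past a fixed image row.
    dash_plus_gap_m is the real-world length of that cycle (assumed value;
    it varies by jurisdiction)."""
    seconds = frames_per_marking_cycle / fps
    meters_per_second = dash_plus_gap_m / seconds
    return meters_per_second * 3.6

# Example: one 10 m marking cycle passing in 18 frames at 30 fps
# gives 10 m / 0.6 s ≈ 16.7 m/s ≈ 60 km/h, which can then be
# compared against the posted speed limit.
print(round(estimate_speed_kmh(18), 1))  # ≈ 60.0
```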
Description: Master's thesis, National Chengchi University, Department of Computer Science (student ID 110753129)
Identifier: G0110753129
Source: http://thesis.lib.nccu.edu.tw/record/#G0110753129
URI: https://nccur.lib.nccu.edu.tw/handle/140.119/150170
Type: thesis
Table of Contents:
Chapter 1 Introduction
  1.1 Motivation and Challenges
  1.2 Contribution
Chapter 2 Related Work
  2.1 Object detection
  2.2 Object tracking
  2.3 Traffic violation detection
Chapter 3 Approach
  3.1 Violation Target Tracking (VTT)
  3.2 Target Action Analysis (TAA)
    3.2.1 Red-Light Running (RLR)
    3.2.2 Turning on Red Light (TRL)
  3.3 Vehicle speed prediction system
    3.3.1 Prediction result optimization
Chapter 4 Experimental Results
  4.1 Detecting running red light and turning on red light
    4.1.1 Dataset
    4.1.2 Training/Test Environments and Metrics
    4.1.3 Test Results
  4.2 Speed estimation
  4.3 Limitations and Future Work
Chapter 5 Conclusion
References
Format: application/pdf, 9,397,279 bytes